BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:The troublesome kernel — On AI generated hallucinations in deep 
 learning for inverse problems - Nina Maria Gottschling (Cambridge Centre f
 or Analysis\, University of Cambridge)
DTSTART:20211130T090000Z
DTEND:20211130T100000Z
UID:TALK166528@talks.cam.ac.uk
DESCRIPTION:There is overwhelming empirical evidence that Deep Learning (D
 L) leads to unstable methods in applications ranging from image classifica
 tion and computer vision to voice recognition and automated diagnosis in m
 edicine. Recently\, a similar instability phenomenon has been discovered w
 hen DL is used to solve certain problems in computational science\, namely
 \, inverse problems in imaging. The talk presents a comprehensive mathemat
 ical analysis explaining the many facets of the instability phenomenon in 
 DL for inverse problems. In particular\, these instabilities include fals
 e positives\, false negatives\, and AI hallucinations. Furthermore\, the r
 esults indicate how training typically encourages AI hallucinations and i
 nstabilities.
LOCATION:Seminar Room 2\, Newton Institute
END:VEVENT
END:VCALENDAR
