BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Can neural networks always be trained? On the boundaries of deep l
 earning - Matthew Colbrook (University of Cambridge)
DTSTART:20190501T150000Z
DTEND:20190501T160000Z
UID:TALK124339@talks.cam.ac.uk
CONTACT:59181
DESCRIPTION:Deep learning has emerged as a competitive new tool in image r
 econstruction. However\, recent results demonstrate that such methods are
  typically highly unstable: tiny\, almost undetectable perturbations cause
  severe artefacts in the reconstruction\, a major concern in practice. Thi
 s is paradoxical given the existence of stable state-of-the-art methods fo
 r these problems. Indeed\, since neural networks can approximate such stabl
 e methods\, approximation-theoretic results non-constructively imply the e
 xistence of stable and accurate neural networks. Hence the fundamental que
 stion: can we explicitly construct or train stable and accurate neural net
 works for image reconstruction? I will discuss two results in this directi
 on. The first is negative: such constructions are in general impossible\,
  even given access to the solutions of common optimisation problems such a
 s basis pursuit. The second is positive: under sparsity assumptions\, such
  neural networks can be constructed. These neural networks are stable and
  theoretically competitive with state-of-the-art methods. Numerical exampl
 es of competitive performance are also provided.
LOCATION:MR14\, Centre for Mathematical Sciences
END:VEVENT
END:VCALENDAR
