BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Fundamental Limits of Learning with Feedforward and Recurrent Neur
 al Networks - Prof. Helmut Bolcskei\, ETH Zurich
DTSTART:20220527T110000Z
DTEND:20220527T120000Z
UID:TALK171458@talks.cam.ac.uk
CONTACT:Prof. Ramji Venkataramanan
DESCRIPTION:Deep neural networks have led to breakthrough results in numer
 ous practical machine learning tasks such as image classification\, image 
 captioning\, control-policy-learning to play the board game Go\, and most 
 recently the prediction of protein structures. In this lecture\, we 
 will attempt to understand some of the structural and mathematical 
 reasons driving these successes. Specifically\, we study what is 
 possible in principle if no constraints are imposed on the learning 
 algorithm and on the amount and quality of training data. The guiding 
 theme will be a relation between the complexity of the objects to be 
 learned and the networks approximating them\, with the central result 
 stating that universal Kolmogorov-optimality is achieved by 
 feedforward neural networks in function learning and by recurrent 
 neural networks in dynamical system learning.
LOCATION:Department of Engineering - LT1
END:VEVENT
END:VCALENDAR
