BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Deep learning as optimal control problems and Riemannian discrete 
 gradient descent - Elena Celledoni (Norwegian University of Science and T
 echnology)
DTSTART:20191121T150500Z
DTEND:20191121T154500Z
UID:TALK135052@talks.cam.ac.uk
CONTACT:INI IT
DESCRIPTION:We consider recent work where deep learning neural networks ha
 ve been interpreted as discretisations of an optimal control problem subje
 ct to an ordinary differential equation constraint. We review the first or
 der conditions for optimality\, and the conditions ensuring optimality aft
 er discretisation. This leads to a class of algorithms for solving the dis
 crete optimal control problem which guarantee that the corresponding discr
 ete necessary conditions for optimality are fulfilled. The differential eq
 uation setting lends itself to learning additional parameters such as the 
 time discretisation. We explore this extension alongside natural constrain
 ts (e.g. time steps lie in a simplex). We compare these deep learning algo
 rithms numerically in terms of induced flow and generalisation ability.\n
 \nReferences:\n- M Benning\, E Celledoni\, MJ Ehrhardt\, B Owren\, CB Sch
 önlieb\, Deep learning as optimal control problems: models and numerical 
 methods\, JCD.
LOCATION:Seminar Room 2\, Newton Institute
END:VEVENT
END:VCALENDAR
