BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Using gradient descent for optimization and learning - Nicolas Le 
 Roux (Microsoft Research)
DTSTART:20090526T120000Z
DTEND:20090526T140000Z
UID:TALK18624@talks.cam.ac.uk
CONTACT:Simon Lacoste-Julien
DESCRIPTION:In machine learning\, to solve a particular task\, we often de
 fine a cost function which we are trying to minimize over a set of data po
 ints\, the training set. There has been extensive work on designing effici
 ent optimization techniques to minimize such functions\, amongst them BFGS
  and conjugate gradient. However\, this only ensures a fast decrease in t
 he training error\, whereas we are ultimately interested in minimizing ano
 ther error: the test error. Reaching a low test error for a particular pro
 blem is called learning. As Bottou has shown\, good optimizers may prove t
 o be very bad learning methods and vice versa.\n\nAfter reviewing the most
  popular optimization techniques\, I shall give a brief summary of Bottou'
 s conclusions and then present a new gradient descent algorithm which aims
  to directly address the issue of optimizing the test error using only a t
 raining set.\n\n\nSpeaker's bio:\n\nNicolas received a Master's degree in
  Applied Maths from the Ecole Centrale Paris and one in Maths\, Vision and
  Learning from the ENS Cachan in 2003. From 2004 to 2008\, he did a PhD in
  Montreal under the supervision of Yoshua Bengio\, working on designing an
 d optimizing neural networks. Since 2008\, he has been a postdoctoral rese
 archer at Microsoft Research Cambridge\, working with John Winn on using d
 eep neural networks for vision.
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
