BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Large-scale convex optimization for machine learning - Francis Bac
 h\, INRIA\, Laboratoire d'Informatique de l'Ecole Normale Supérieure
DTSTART:20120504T150000Z
DTEND:20120504T160000Z
UID:TALK35760@talks.cam.ac.uk
CONTACT:Richard Samworth
DESCRIPTION:Many machine learning and signal processing problems are tradi
 tionally\ncast as convex optimization problems. A common difficulty in sol
 ving\nthese problems is the size of the data\, where there are many\nobser
 vations ("large n") and each of these is large ("large p"). In\nthis setti
 ng\, online algorithms which pass over the data only once\nare usually p
 referred over batch algorithms\, which require multiple\npasses over the d
 ata. In this talk\, I will present several recent\nresults\, showing that 
 in the ideal infinite-data setting\, online\nlearning algorithms based on 
 stochastic approximation should be\npreferred (both in terms of running sp
 eed and generalization\nperformance)\, but that in the practical finite-da
 ta setting\, an\nappropriate combination of batch and online algorithms le
 ads to\nunexpected behaviors\, such as a linear convergence rate with an\n
 iteration cost similar to stochastic gradient descent.\n\nThis is joint wo
 rk with Nicolas Le Roux\, Eric Moulines and Mark Schmidt.
LOCATION:MR12\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
END:VEVENT
END:VCALENDAR
