BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Coin Betting for Backprop without Learning Rates and More - Fran
 cesco Orabona\, Stony Brook University
DTSTART:20170824T120000Z
DTEND:20170824T130000Z
UID:TALK73851@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:Deep learning methods achieve state-of-the-art performance 
 in many application scenarios. Yet\, these methods require a significan
 t amount of hyperparameter tuning to achieve the best results. In parti
 cular\, tuning the learning rates in the stochastic optimization proces
 s is still one of the main bottlenecks.\nIn this talk\, I will propose 
 a new stochastic gradient descent procedure that does not require any l
 earning rate setting. Contrary to previous methods\, we do not adapt th
 e learning rates\, nor do we make use of the assumed curvature of the o
 bjective function. Instead\, we reduce the optimization process to a ga
 me of betting on a non-stochastic coin\, and we propose an optimal stra
 tegy based on a generalization of Kelly betting. Moreover\, I'll show h
 ow this reduction can also be used for other machine learning problems.
 \nTheoretical convergence is proven for convex and quasi-convex functio
 ns\, and empirical evidence shows the advantage of our algorithm over p
 opular stochastic gradient algorithms.
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
