BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Stochastic optimization and adaptive learning rates - Yingzhen Li 
 (University of Cambridge)\; Mark Rowland
DTSTART:20151126T143000Z
DTEND:20151126T160000Z
UID:TALK62020@talks.cam.ac.uk
CONTACT:Yingzhen Li
DESCRIPTION:Stochastic optimization is prevalent in modern machine learni
 ng\, and the main purpose of this talk is to understand why it works. We w
 ill first briefly recap the history of stochastic approximation methods\, s
 tarting from the famous Robbins and Monro paper. Then we will introduce t
 he cost function minimization problem in the machine learning context an
 d show how to prove the convergence of stochastic gradient descent to a l
 ocal optimum. We present the proof in three steps: continuous gradient d
 escent\, discrete gradient descent\, and stochastic gradient descent. How
 ever\, the conditions on the learning rates used in the proof are not ne
 cessary. So in the second part of the talk we will discuss popular adapt
 ive learning rates\, and in particular we will give a short tutorial on o
 nline learning to build intuition for the regret bounds. Finally\, we wil
 l have a live demo session comparing different learning rates.
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
