On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex
- Speaker: Michael Jordan (University of California, Berkeley)
- Date & Time: Monday 15 January 2018, 10:00-10:45
- Venue: Seminar Room 1, Newton Institute
Abstract
Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. I discuss several related, recent results in this area: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time, Lagrangian/Hamiltonian/symplectic perspective, (2) a discussion of how to escape saddle points efficiently in nonconvex optimization, and (3) the acceleration of Langevin diffusion.
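As minimal reference points for the three topics above (these sketches are illustrative and do not reproduce the talk's results; all problem data, step sizes, and thresholds are assumptions), consider the following.

Topic (1): Nesterov's accelerated gradient method. Its continuous-time limit is the ODE x''(t) + (3/t) x'(t) + grad f(x(t)) = 0, which is the kind of dynamics the Lagrangian/Hamiltonian/symplectic perspective generalizes.

```python
import numpy as np

# Nesterov's accelerated gradient method on a toy ill-conditioned
# quadratic f(x) = 0.5 * x^T A x - b^T x (assumed problem data).
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)

def grad(v):
    return A @ v - b

L_smooth = 100.0                # Lipschitz constant of grad f (largest eigenvalue of A)
step = 1.0 / L_smooth
x = np.zeros(3)
y = x.copy()
for k in range(1, 501):
    x_new = y - step * grad(y)                   # gradient step at the look-ahead point
    y = x_new + (k - 1) / (k + 2) * (x_new - x)  # momentum extrapolation
    x = x_new
print("gradient norm:", np.linalg.norm(grad(x)))
```

Topic (2): escaping saddle points. In the spirit of Jin et al., "How to Escape Saddle Points Efficiently" (ICML 2017), gradient descent is perturbed by noise sampled from a small ball whenever the gradient is small, so iterates leave strict saddles.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(v):
    # f(x, y) = (x^2 - 1)^2 + y^2: strict saddle at the origin, minima at (+-1, 0).
    return np.array([4.0 * v[0] * (v[0] ** 2 - 1.0), 2.0 * v[1]])

x = np.zeros(2)                              # start exactly at the saddle
step, g_tol, radius = 1e-2, 1e-3, 1e-1       # illustrative constants
for _ in range(5000):
    g = grad(x)
    if np.linalg.norm(g) < g_tol:
        # Small gradient: perturb uniformly within a ball of the given radius.
        # (The published algorithm also rations how often it perturbs and
        # checks for function decrease; omitted here for brevity.)
        u = rng.normal(size=x.shape)
        x = x + (radius * rng.random() ** (1 / x.size)) * u / np.linalg.norm(u)
    else:
        x = x - step * g
print("escaped to:", x)                      # approximately (+1, 0) or (-1, 0)
```

Topic (3): Langevin diffusion. The baseline below is the unadjusted (overdamped) Langevin algorithm; the talk concerns accelerated variants of this diffusion, which are not shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unadjusted Langevin algorithm targeting p(x) proportional to exp(-f(x))
# with f(x) = 0.5 * ||x||^2, i.e. a standard Gaussian (illustrative target).
h = 0.05                                     # step size (assumption)
x = np.zeros(2)
samples = []
for _ in range(5000):
    # grad f(x) = x; one Euler-Maruyama step of the Langevin diffusion.
    x = x - h * x + np.sqrt(2.0 * h) * rng.normal(size=2)
    samples.append(x.copy())
samples = np.array(samples)
print("mean:", samples.mean(axis=0))         # ~ 0
print("var :", samples.var(axis=0))          # ~ 1, up to O(h) discretization bias
```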
Series: This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute