Representation, optimization and generalization properties of deep neural networks
- Speaker: Peter Bartlett (University of California, Berkeley)
- Date & Time: Wednesday 27 June 2018, 11:45–12:30
- Venue: Seminar Room 1, Newton Institute
Abstract
Deep neural networks have improved the state-of-the-art performance for prediction problems across an impressive range of application areas. This talk describes some recent results in three directions. First, we investigate the impact of depth on the representational properties of deep residual networks, which compute near-identity maps at each layer, showing how their representational power improves with depth and that the functional optimization landscape has the desirable property that stationary points are optimal. Second, we study the implications for optimization in deep linear networks, showing how the success of a family of gradient descent algorithms that regularize towards the identity function depends on a positivity condition on the regression function. Third, we consider how the performance of deep networks on training data compares to their predictive accuracy: we demonstrate deviation bounds that scale with a certain “spectral complexity,” and we compare the behavior of these bounds with the observed performance of these networks on practical problems.
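Margin bounds of the kind described in the third direction typically scale with the product of per-layer spectral norms, corrected by a term involving columnwise (2,1)-norms. As a minimal sketch only (the exact measure used in the talk may differ; the function name and the precise grouping of norms here are assumptions), one common form of such a "spectral complexity" can be computed as:

```python
import numpy as np

def spectral_complexity(weights):
    """Illustrative "spectral complexity" of a list of weight matrices
    (a sketch of the Bartlett–Foster–Telgarsky-style quantity; the
    talk's exact definition may differ):

        R = (prod_i ||W_i||_sigma)
            * (sum_i (||W_i||_{2,1} / ||W_i||_sigma)^{2/3})^{3/2}

    where ||.||_sigma is the spectral norm (largest singular value)
    and ||.||_{2,1} sums the Euclidean norms of the columns.
    """
    # Per-layer spectral norms; their product bounds the network's
    # Lipschitz constant (for 1-Lipschitz activations).
    spectral = [np.linalg.norm(W, ord=2) for W in weights]
    lip = float(np.prod(spectral))
    # Correction term: (2,1)-norm relative to the spectral norm,
    # summed over layers with exponent 2/3.
    ratio = sum(
        (np.linalg.norm(W, ord=2, axis=0).sum() / s) ** (2.0 / 3.0)
        for W, s in zip(weights, spectral)
    )
    return lip * ratio ** 1.5

# A single 1x1 identity layer has complexity 1; scaling the weight
# scales the bound through the spectral-norm product.
print(spectral_complexity([np.eye(1)]))        # 1.0
print(spectral_complexity([2.0 * np.eye(1)]))  # 2.0
```

Such measures are appealing because they are scale-sensitive: unlike a parameter count, they can stay small for networks whose layers are close to low-norm or near-identity maps.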
Joint work with Steve Evans, Dylan Foster, Dave Helmbold, Phil Long, and Matus Telgarsky.
Series
This talk is part of the Isaac Newton Institute Seminar Series.
Included in Lists
- All CMS events
- bld31
- dh539
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 1, Newton Institute