Towards a better understanding of early stopping for boosting algorithms
- Speaker: Yuting Wei, Stanford University
- Date & Time: Friday 02 November 2018, 16:00 - 17:00
- đ Venue: MR12
Abstract
In this talk, I will discuss the behaviour of boosting algorithms for non-parametric regression. While non-parametric models offer great flexibility, they can lead to overfitting and thus poor generalisation performance. For this reason, procedures for fitting these models must involve some form of regularisation. Although early stopping of iterative algorithms is a widely-used form of regularisation in statistics and optimisation, it is less well-understood than its analogue based on penalised regularisation. We exhibit a direct connection between a stopped iterate and the localised Gaussian complexity of the associated function class, which allows us to derive explicit and optimal stopping rules. We will discuss such stopping rules in detail for various reproducing kernel Hilbert spaces, and also extend these insights to broader classes of functions.
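To make the idea concrete, here is a minimal sketch (not from the talk itself) of early stopping for L2-boosting, i.e. functional gradient descent on squared loss in a reproducing kernel Hilbert space. The stopping rule here is the simple validation-based one, chosen for illustration; the talk's contribution is a data-dependent rule based on localised Gaussian complexity, which this sketch does not implement. All function and variable names are hypothetical.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth=0.5):
    # Pairwise Gaussian (RBF) kernel matrix between two 1-D samples.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kernel_boosting(X_train, y_train, X_val, y_val, step=0.1, max_iter=500):
    """L2-boosting in an RKHS with validation-based early stopping.

    Each iteration takes a small step along the negative functional
    gradient of the empirical squared loss; the returned iterate is
    the one with the lowest validation error (the 'stopped iterate').
    """
    K = gaussian_kernel(X_train, X_train)       # train-train Gram matrix
    K_val = gaussian_kernel(X_val, X_train)     # val-train cross-kernel
    alpha = np.zeros(len(X_train))              # kernel expansion coefficients
    best_alpha, best_err, best_t = alpha.copy(), np.inf, 0
    n = len(X_train)
    for t in range(1, max_iter + 1):
        residual = y_train - K @ alpha          # negative functional gradient
        alpha = alpha + step * residual / n     # boosting update
        err = np.mean((y_val - K_val @ alpha) ** 2)
        if err < best_err:                      # track the best stopped iterate
            best_err, best_alpha, best_t = err, alpha.copy(), t
    return best_alpha, best_t, best_err

# Toy usage: noisy sine regression.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=200)
y = np.sin(2 * X) + 0.3 * rng.standard_normal(200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]
alpha, t_stop, val_err = kernel_boosting(X_tr, y_tr, X_va, y_va)
```

Running too many iterations would eventually interpolate the noise; the tracked best iterate plays the role of the regularised estimator that penalised methods achieve via an explicit penalty term.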
Series: This talk is part of the Statistics series.
Included in Lists
- All CMS events
- All Talks (aka the CURE list)
- bld31
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- CMS Events
- custom
- DPMMS info aggregator
- DPMMS lists
- DPMMS Lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Machine Learning
- MR12
- rp587
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
Note: Ex-directory lists are not shown.