Safe Learning: How to Modify Bayesian Inference when All Models are Wrong
- Speaker: Peter Grünwald, Centrum voor Wiskunde en Informatica, Amsterdam
- Date & Time: Friday 10 February 2012, 16:00 - 17:00
- Venue: MR12, CMS, Wilberforce Road, Cambridge, CB3 0WB
Abstract
Standard Bayesian inference can behave suboptimally if the model under consideration is wrong: in some simple settings, the posterior may fail to concentrate even in the limit of infinite sample size. We introduce a test that can tell from the data whether we are in such a situation. If we are, we can adjust the learning rate (equivalently: make the prior lighter-tailed) in a data-dependent way. The resulting “safe” estimator continues to achieve good rates with wrong models. When applied to classification problems, the safe estimator achieves the optimal rates for the Tsybakov exponent of the underlying distribution, thereby establishing a connection between Bayesian inference and statistical learning theory.
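The "learning rate" adjustment mentioned in the abstract corresponds to what is often called a tempered or generalized posterior, in which the likelihood is raised to a power η ≤ 1 before being combined with the prior. The following is a minimal illustrative sketch (not the speaker's actual estimator, and the data, models, and function names are hypothetical) showing how η < 1 flattens the likelihood's influence over a finite class of candidate models:

```python
import numpy as np

def tempered_posterior(log_likelihoods, log_prior, eta):
    """Generalized (tempered) Bayesian posterior over a finite model class.

    log_likelihoods: total log-likelihood of the data under each candidate
    log_prior: log prior weight of each candidate
    eta: learning rate; eta = 1.0 recovers standard Bayes, while eta < 1
         downweights the likelihood, tempering overconfident conclusions
         when every candidate model is wrong.
    """
    log_post = eta * np.asarray(log_likelihoods) + np.asarray(log_prior)
    log_post -= log_post.max()           # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Two candidate Bernoulli models for a sequence of coin flips;
# neither need be the true data-generating distribution.
data = np.array([1, 1, 0, 1, 1, 0, 1, 1])
thetas = np.array([0.5, 0.8])
loglik = np.array([np.sum(data * np.log(t) + (1 - data) * np.log(1 - t))
                   for t in thetas])
log_prior = np.log(np.array([0.5, 0.5]))

standard = tempered_posterior(loglik, log_prior, eta=1.0)
safe = tempered_posterior(loglik, log_prior, eta=0.5)
```

With η = 0.5 the posterior still favours the better-fitting model but assigns it less extreme probability than standard Bayes, which is the qualitative effect of making the estimator "safe"; the talk's contribution is a data-driven test for choosing η.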
Series: This talk is part of the Statistics series.
Included in Lists
- All CMS events
- All Talks (aka the CURE list)
- bld31
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- CMS Events
- custom
- DPMMS info aggregator
- DPMMS lists
- DPMMS Lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Machine Learning
- MR12, CMS, Wilberforce Road, Cambridge, CB3 0WB
- rp587
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
Note: Ex-directory lists are not shown.