Safe Learning: How to Modify Bayesian Inference when All Models are Wrong
- 🎤 Speaker: Peter Grünwald, Centrum voor Wiskunde en Informatica, Amsterdam
- 📅 Date & Time: Thursday 20 October 2011, 16:00 - 17:00
- 📍 Venue: MR12, CMS, Wilberforce Road, Cambridge, CB3 0WB
Abstract
Standard Bayesian inference can behave suboptimally if the model under consideration is wrong: in some simple settings, the posterior may fail to concentrate even in the limit of infinite sample size. We introduce a test that can tell from the data whether we are in such a situation. If we are, we can adjust the learning rate (equivalently: make the prior lighter-tailed) in a data-dependent way. The resulting “safe” estimator continues to achieve good rates with wrong models. When applied to classification, the approach achieves optimal rates under Tsybakov’s conditions, thereby creating a bridge between Bayes/MDL and statistical learning-style inference.
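The data-dependent adjustment of the learning rate described in the abstract corresponds to a generalized ("tempered") posterior, in which the likelihood is raised to a power η before being combined with the prior. The sketch below is a minimal illustration of that update rule on a discrete parameter grid, not the speaker's actual algorithm; the function name and the Bernoulli example are hypothetical choices for illustration.

```python
import numpy as np

def generalized_posterior(thetas, prior, data, eta):
    """Tempered ('generalized') Bayesian update on a Bernoulli parameter grid.

    The likelihood is raised to the power eta (the learning rate):
    eta = 1 recovers standard Bayes, while eta < 1 downweights the data,
    the kind of adjustment made when misspecification is detected.
    """
    # Bernoulli log-likelihood of the data under each candidate theta
    k, n = data.sum(), len(data)
    log_lik = k * np.log(thetas) + (n - k) * np.log(1 - thetas)
    # Unnormalized tempered log posterior: log prior + eta * log likelihood
    log_post = np.log(prior) + eta * log_lik
    log_post -= log_post.max()  # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()    # normalize to a proper distribution
```

With η = 1 this reproduces the ordinary posterior; shrinking η flattens the posterior, making it concentrate more slowly and guarding against overconfident conclusions from a wrong model.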
Series: This talk is part of the Statistics series.