BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Learning rates in Bayesian nonparametrics: Gaussian process priors
  - Aad van der Vaart (Vrije Univ. Amsterdam)
DTSTART:20091127T160000Z
DTEND:20091127T170000Z
UID:TALK20013@talks.cam.ac.uk
CONTACT:Richard Nickl
DESCRIPTION:joint with Probability Series\n\nThe sample path of a Gaussian
  process can be used as a prior model\nfor an unknown function that we wis
 h to estimate. For instance\, one might\nmodel a regression function or lo
 g density a priori as\nthe sample path of a Brownian motion or its primiti
 ve\, or some\nstationary process.  Viewing this prior model as a formal pr
 ior distribution in a Bayesian\nset-up\, we obtain a posterior distributio
 n in the usual way\,\nwhich\, given the observations\, is a probability di
 stribution on a function space.\n\nWe study this posterior distribution un
 der the assumption that the\ndata is generated according to some given tru
 e function\, and are\ninterested in whether the posterior contracts to the
  true function if\nthe informativeness in the data increases indefinitely\
 , and at what\nspeed.\n\nFor Gaussian process priors this rate of contract
 ion can be described\nin terms of the small ball probability of the G
 aussian process and\nthe position of the true parameter relative to\nits r
 eproducing kernel Hilbert space. Typically the prior has\na strong influen
 ce on the contraction rate. This dependence can be\nalleviated by scaling 
 the sample paths. For instance\,\nan infinitely smooth\, stationary Gaussi
 an process\nscaled by an inverse Gamma variable yields\na prior distributi
 on on functions such that the posterior\ndistribution adapts to the unknow
 n smoothness\nof the true parameter\, in the sense that contraction takes\
 nplace at the minimax rate for the true smoothness.\n
LOCATION:MR5\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
END:VEVENT
END:VCALENDAR
