BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Learning rates in Bayesian nonparametrics: Gaussian process priors
  - Aad van der Vaart (Vrije Univ. Amsterdam)
DTSTART:20091127T163000Z
DTEND:20091127T173000Z
UID:TALK21245@talks.cam.ac.uk
CONTACT:Berestycki
DESCRIPTION:Joint with Statistics Series\n\nThe sample path of a Gaussian 
 process can be used as a prior model\nfor an unknown function that we wish
  to estimate. For instance\, one might\nmodel a regression function or log
  density a priori as\nthe sample path of a Brownian motion or its primitiv
 e\, or some\nstationary process.  Viewing this prior model as a formal pri
 or distribution in a Bayesian\nset-up\, we obtain a posterior distribution
  in the usual way\,\nwhich\, given the observations\, is a probability dis
 tribution on a function space.\n\nWe study this posterior distribution und
 er the assumption that the\ndata is generated according to some given true
  function\, and are\ninterested in whether the posterior contracts to the 
 true function if\nthe informativeness in the data increases indefinitely\,
  and at what\nspeed.\n\nFor Gaussian process priors this rate of contracti
  on can be described\nin terms of the small ball probability of the Ga
 ussian process and\nthe position of the true parameter relative to\nits re
 producing kernel Hilbert space. Typically the prior has\na strong influenc
 e on the contraction rate. This dependence can be\nalleviated by scaling t
 he sample paths. For instance\,\nan infinitely smooth\, stationary Gaussia
 n process\nscaled by an inverse Gamma variable yields\na prior distributio
 n on functions such that the posterior\ndistribution adapts to the unknown
  smoothness\nof the true parameter\, in the sense that contraction takes\n
 place at the minimax rate for the true smoothness.\n
LOCATION:MR5\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
END:VEVENT
END:VCALENDAR
