BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Learning rates in Bayesian nonparametrics: Gaussian process priors
  - Aad van der Vaart (Vrije Univ. Amsterdam)
DTSTART:20091125T160000Z
DTEND:20091125T170000Z
UID:TALK21707@talks.cam.ac.uk
CONTACT:HoD Secretary\, DPMMS
DESCRIPTION:The sample path of a Gaussian process can be used as a prior m
 odel for an unknown function that we wish to estimate. For instance\, one 
 might model a regression function or log density a priori as the sample pa
 th of a Brownian motion or its primitive\, or some stationary process. Vie
 wing this prior model as a formal prior distribution in a Bayesian set-up\
 , we obtain a posterior distribution in the usual way\, which\, given the 
 observations\, is a probability distribution on a function space.\n\nWe st
 udy this posterior distribution under the assumption that the data is gene
 rated according to some given true function\, and are interested in whethe
 r the posterior contracts to the true function if the informativeness of t
 he data increases indefinitely\, and at what speed.\n\nFor Gaussian proces
 s priors this contraction rate can be described in terms of the sm
 all ball probability of the Gaussian process and the position of the true 
 parameter relative to its reproducing kernel Hilbert space. Typically the 
 prior has a strong influence on the contraction rate. This dependence can 
 be alleviated by scaling the sample paths. For instance\, an infinitely sm
 ooth\, stationary Gaussian process scaled by an inverse Gamma variable yie
 lds a prior distribution on functions such that the posterior distribution
  adapts to the unknown smoothness of the true parameter\, in the sense tha
 t contraction takes place at the minimax rate for the true smoothness.
LOCATION:MR5\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
END:VEVENT
END:VCALENDAR
