BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Representing conditional independence in probabilistic programming
  - Daniel Roy\, Engineering Department\, University of Cambridge
DTSTART:20140124T160000Z
DTEND:20140124T170000Z
UID:TALK48718@talks.cam.ac.uk
CONTACT:Jonathan Hayman
DESCRIPTION:Programs can be used to give compact representations of distri
 butions: in order to represent a distribution\, one simply gives a program
  that would generate an exact sample were the random number generator to p
 roduce true randomness. This approach to representing distributions by pro
 babilistic programs works not only for simple distributions on numbers lik
 e Poissons\, Gaussians\, etc.\, but also for more exotic distributions on\
 , e.g.\, phrases in natural language\, friendships in a social network\, r
 endered 2D images of 3D scenes\, and climate sensor measurements.\n\n\nPro
 babilistic programming systems support statistical inference on models def
 ined by probabilistic programs. By constraining some variables of a progra
 m (e.g.\, simulated sensor readings in some climate model) and studying th
 e conditional distribution of other variables (e.g.\, the parameters of th
 e climate model)\, we can identify plausible variable settings that agree 
 with the constraints. Conditional inferences like this would allow us to\,
  e.g.\, build predictive text systems for mobile phones\, identify missing
  links in a social network\, guess the 3D shape of an object from only a p
 hotograph\, or study the underlying mechanisms driving observed climate me
 asurements.\n\n\nProbabilistic programming systems for machine learning an
 d statistics are still in their infancy\, and there are many interesting t
 heoretical and applied problems yet to be tackled.  One of the most challe
 nging problems is understanding when such systems can be efficient.  The e
 fficiency of general-purpose probabilistic inference algorithms\, especial
 ly probabilistic programming systems\, often depends on the abundance of c
 onditional independence in the underlying probabilistic model.  Informally
 \, conditional independence among random variables in a model can allow a 
 large inference problem to be broken down into smaller pieces that can be 
 solved individually.  But even if a probabilistic model exhibits a high de
 gree of conditional independence\, the chosen probabilistic programming la
 nguage might make it difficult or impossible to expose that conditional in
 dependence to the algorithm\, all but ruling out the hope for an efficient
  algorithm.\n\n\nA challenging and important domain for probabilistic pro
 gramming systems is nonparametric Bayesian statistics\, and in the course 
 of studying and implementing state-of-the-art models\, we have identified 
 several situations where the probabilistic programming language Church see
 ms to be unable to represent the conditional independencies present in a m
 odel.  But the problem is not specific to Church\, because we show that it
  is a general problem for any system that represents distributions with sa
 mpling programs that must halt with probability one.  We propose a new def
 inition for “probabilistic program” where the program must sample corr
 ectly only with arbitrarily high probability\, and show that this relaxati
 on allows us to represent conditional independencies that were previously 
 unrepresentable.\n\n\nJoint work with Nate Ackerman\, Jeremy Avigad\, Camer
 on Freer and Jason Rute.
LOCATION:Room FW26\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
