BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Model selection in a large compositional space - Roger Grosse (MI
 T)
DTSTART:20121011T133000Z
DTEND:20121011T150000Z
UID:TALK40973@talks.cam.ac.uk
CONTACT:Konstantina Palla
DESCRIPTION:We often build complex probabilistic models by "composing" sim
 pler\nmodels -- using one model to generate the latent variables for anoth
 er\nmodel. This allows us to express complex distributions over the\nobser
 ved data and to share statistical structure between different\nparts of a 
 model. I'll present a space of matrix decomposition models\ndefined by the
  composition of a small number of motifs of\nprobabilistic modeling\, such
  as clustering\, low rank factorizations\,\nand binary latent factor model
 s. This compositional structure can be\nrepresented by a context-free gram
 mar whose production rules\ncorrespond to these motifs. By exploiting the 
 structure of this\ngrammar\, we can generically and efficiently infer late
 nt components\nand estimate predictive likelihood for nearly 2500 model st
 ructures\nusing a small toolbox of reusable algorithms. Using a greedy sea
 rch\nover this grammar\, we automatically choose the decomposition structu
 re\nfrom raw data by evaluating only a small fraction of all models. The\n
 proposed method typically finds the correct structure for synthetic\ndata 
 and backs off gracefully to simpler models under heavy noise. It\nlearns s
 ensible structures for datasets as diverse as image patches\,\nmotion capt
 ure\, 20 Questions\, and U.S. Senate votes\, all using exactly\nthe same c
 ode. I'll briefly describe my ongoing work on estimating\nmarginal likelih
 ood in this space of models and how I think this work\nrelates to composit
 ional models more generally.
LOCATION:Engineering Department\, CBL Room 438
END:VEVENT
END:VCALENDAR
