BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:A quick way to learn a mixture of exponentially many linear models
  - Geoffrey Hinton\, Canadian Institute for Advanced Research & Univer
 sity of Toronto
DTSTART:20090615T140000Z
DTEND:20090615T150000Z
UID:TALK17338@talks.cam.ac.uk
CONTACT:David MacKay
DESCRIPTION:Mixtures of linear models can be used to model data that lies 
 on or\nnear a smooth non-linear manifold.\nA proper Bayesian treatment can
  be applied to toy data to determine\nthe number of models in the mixture
  and the dimensionality of each\nlinear model but this neurally uninspired
  approach completely misses\nthe main problem: Real data with many degrees
  of freedom in the\nmanifold requires a mixture with an exponential number
  of components.\nIt is quite easy to fit mixtures of 2^1000 linear models 
 by using a\nfew tricks: First\, each linear model selects from a pool of s
 hared\nfactors using the selection rule that factors with negative values 
 are\nignored. Second\, undirected linear models are used to simplify\ninfe
 rence and the models are trained by matching pairwise statistics.\nThird\,
  Poisson noise is used to implement L1 regularization of the\nactivities 
 of the factors. The factors are then threshold linear\nneurons with Poisso
 n noise and their positive integer activities are\nvery sparse. Preliminar
 y results suggest that these exponentially\nlarge mixtures work very well 
 as modules for greedy\, layer-by-layer\nlearning of deep networks. Even wi
 th one eye closed\, they outperform\nSupport Vector machines for recogniz
 ing 3-D images of objects from\nthe NORB database.\n
LOCATION:TCM Seminar Room\, Cavendish Laboratory\, Department of Physics
END:VEVENT
END:VCALENDAR
