BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Nonlinear ICA using temporal structure: a principled framework for
  unsupervised deep learning - Prof. Aapo Hyvarinen
DTSTART:20171019T100000Z
DTEND:20171019T110000Z
UID:TALK78201@talks.cam.ac.uk
CONTACT:Dr R.E. Turner
DESCRIPTION:Unsupervised learning\, in particular learning general nonline
 ar representations\, is one of the deepest problems in machine learning. E
 stimating latent quantities in a generative model provides a principled fr
 amework\, and has been successfully used in the linear case\, e.g. with in
 dependent component analysis (ICA) and sparse coding. However\, extending 
 ICA to the nonlinear case has proven to be extremely difficult: a straigh
 tforward extension is unidentifiable\, i.e. it is not possible to recover 
 those latent components that actually generated the data. Here\, we show t
 hat this problem can be solved by using temporal structure. We formulate t
 wo generative models in which the data is an arbitrary but invertible nonl
 inear transformation of time series (components) which are statistically i
 ndependent of each other. Drawing from the theory of linear ICA\, we formu
 late two distinct classes of temporal structure of the components which en
 able identification\, i.e. recovery of the original independent components
 . We show that in both cases\, the actual learning can be performed by ord
 inary neural network training where only the input is defined in an unconv
 entional manner\, making software implementations trivial. We can rigorous
 ly prove that after such training\, the units in the last hidden layer wil
 l give the original independent components. [With Hiroshi Morioka\, publis
 hed at NIPS2016 and AISTATS2017.]
LOCATION:CBL Room BE-438\, Department of Engineering
END:VEVENT
END:VCALENDAR
