BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Neural Variational Inference for NLP - Yishu Miao\, University of 
 Oxford
DTSTART:20170310T120000Z
DTEND:20170310T130000Z
UID:TALK69975@talks.cam.ac.uk
CONTACT:Kris Cao
DESCRIPTION:Recent advances in neural variational inference have spawned a
  renaissance in deep latent variable models. While traditional variational
  methods derive an analytic approximation for the intractable distribution
 s over latent variables\, here we discuss introducing an inference networ
 k conditioned on the discrete text input to provide the variational distr
 ibution for latent variable models in NLP. For models with continuous lat
 ent variables associated with particular distributions\, such as Gaussian
 s\, there exist reparameterisations (Kingma & Welling\, 2014\; Rezende et
  al.\, 2014) of the distribution permitting unbiased and low-variance est
 imates of the gradients with respect to the parameters of the inference n
 etwork. For models with discrete latent variables\, Monte Carlo estimates
  of the gradient must be employed. Algorithms such as REINFORCE\, augment
 ed with variance-reduction techniques\, have been used effectively to imp
 rove learning (Mnih & Gregor\, 2014\; Mnih et al.\, 2014). In this talk\,
  I will discuss latent variable models for NLP with continuous or discret
 e latent variables\, and their corresponding neural variational inference
  methods.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
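
The abstract's remark on reparameterisations of continuous latent variables
can be made concrete. Below is a minimal sketch, not taken from the talk, of
the Gaussian reparameterisation trick of Kingma & Welling (2014): writing
z = mu + sigma * eps with eps ~ N(0, I) moves the randomness out of the
parameters, so gradients flow to the inference-network outputs mu and
log_sigma. The use of PyTorch and all identifiers here are illustrative
assumptions.

import torch

# Gaussian reparameterisation: z = mu + sigma * eps, with eps ~ N(0, I).
# Because eps is sampled independently of the parameters, gradients of a
# downstream objective reach mu and log_sigma with low variance.
def reparameterise(mu, log_sigma):
    eps = torch.randn_like(mu)              # noise independent of parameters
    return mu + torch.exp(log_sigma) * eps

mu = torch.zeros(4, requires_grad=True)         # stand-in inference-net outputs
log_sigma = torch.zeros(4, requires_grad=True)
z = reparameterise(mu, log_sigma)
loss = (z ** 2).sum()                           # stand-in for an ELBO term
loss.backward()                                 # unbiased gradient estimate
print(mu.grad, log_sigma.grad)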
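
For the discrete case the abstract mentions, a score-function (REINFORCE)
estimator can be sketched as follows, with a moving-average baseline for
variance reduction in the spirit of Mnih & Gregor (2014). The toy learning
signal and all names are hypothetical, not part of the talk.

import torch

logits = torch.zeros(3, requires_grad=True)   # parameters of a categorical q(z)

def learning_signal(z):
    # Hypothetical stand-in for the model term, e.g. log p(x, z) - log q(z|x).
    return 1.0 if z.item() == 2 else 0.0

baseline, lr = 0.0, 0.1
for _ in range(200):
    probs = torch.softmax(logits, dim=0)
    z = torch.multinomial(probs, 1)[0]        # sample a discrete z ~ q(z)
    r = learning_signal(z)
    # REINFORCE surrogate: its gradient is -(r - baseline) * d log q(z)/d logits.
    surrogate = -(r - baseline) * torch.log(probs[z])
    if logits.grad is not None:
        logits.grad.zero_()
    surrogate.backward()
    with torch.no_grad():
        logits -= lr * logits.grad            # ascend the expected signal
    baseline = 0.9 * baseline + 0.1 * r       # moving-average control variate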
