BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Failure Modes of Variational Autoencoders and Their Effects on Dow
 nstream Tasks - Yaniv Yacoby\, Harvard University
DTSTART:20210324T110000Z
DTEND:20210324T123000Z
UID:TALK157981@talks.cam.ac.uk
CONTACT:Elre Oldewage
DESCRIPTION:Variational Autoencoders (VAEs) are deep generative latent var
 iable models that are widely used for a number of downstream tasks -- se
 mi-supervised learning\, learning compressed and disentangled representat
 ions\, and adversarial robustness. VAEs are popular because they are eas
 y to implement and train\; in particular\, the common choice of mean-fiel
 d Gaussian (MFG) approximate posteriors for VAEs (MFG-VAE) results in an i
 nference procedure that is straightforward to implement and stable in tra
 ining. Unfortunately\, a growing body of work has demonstrated that MFG-V
 AEs suffer from a variety of pathologies\, including learning uninformati
 ve latent codes and unrealistic data distributions. When the data consist
 s of images or text\, we often rely on "gut checks" to ensure that the q
 uality of the learned latent representations and generated data is h
 igh\, but for numeric data (e.g. medical EHR data)\, we cannot rely on s
 uch gut checks. Existing work lacks a characterization of exactly when th
 ese pathologies occur and how they impact downstream task performanc
 e. In this talk\, we will characterize when VAE training exhibits patholo
 gies (as global optima of the ELBO) and connect these failure modes to un
 desirable effects on specific downstream tasks.
LOCATION:https://eng-cam.zoom.us/j/86068703738?pwd=YnFleXFQOE1qR1h6Vmtwbno
 0LzFHdz09
END:VEVENT
END:VCALENDAR
