BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Certain about uncertainty? Latent representations of VAEs optimize
 d for visual tasks. - Josefina Catoni\, Universidad Nacional del Litoral
DTSTART:20250212T150000Z
DTEND:20250212T163000Z
UID:TALK228034@talks.cam.ac.uk
CONTACT:Daniel Kornai
DESCRIPTION:Deep Learning methods are increasingly instrumental as modelin
 g tools in Computational Neuroscience\, employing optimality principles t
 o build bridges between neural responses and perception or behavior. Dee
 p Generative Models (DGMs) can learn flexible latent variable representat
 ions of images while avoiding the intractable computations common in Baye
 sian inference. However\, investigating the properties of inference in Va
 riational Autoencoders (VAEs)\, a major class of DGMs\, reveals severe pr
 oblems in their uncertainty representations. Here we draw inspiration fro
 m classical computer vision to introduce an inductive bias into the VA
 E: a global explaining-away latent variable that remedies defective infer
 ence. Unlike standard VAEs\, the Explaining-Away VAE (EA-VAE) provides un
 certainty estimates that align with normative requirements across a wid
 e spectrum of perceptual tasks\, including image corruption\, interpolati
 on\, and out-of-distribution detection. We find that inference is restore
 d because the inference network (the encoder) develops a motif widesprea
 d in biological neural networks: divisive normalization. Our results esta
 blish EA-VAEs as reliable tools for performing inference under deep gener
 ative models with appropriate estimates of uncertainty.
LOCATION:CBL Seminar Room\, Engineering Department\, 4th floor Baker build
 ing
END:VEVENT
END:VCALENDAR
