BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Do Deep Generative Models Know What They Don't Know? - Eric T Nali
 snick (University of Cambridge)
DTSTART:20190214T110000Z
DTEND:20190214T120000Z
UID:TALK119611@talks.cam.ac.uk
CONTACT:Edoardo Maria Ponti
DESCRIPTION:*Abstract*:\nA neural network deployed in the wild may be aske
 d to make predictions for inputs that were drawn from a different distribu
 tion than that of the training data.  A plethora of work has demonstrated 
 that it is easy to find or synthesize inputs for which a neural network is
  highly confident yet wrong. Generative models are widely viewed to be rob
 ust to such overconfident mistakes as modeling the density of the input 
 features can be used to detect novel\, out-of-distribution inputs.  In thi
 s talk\, I challenge this assumption\, focusing the analysis on flow-based gen
 erative models in particular since they are trained and evaluated via the 
 exact marginal likelihood. We find that the model density cannot distingui
 sh images of common objects such as dogs\, trucks\, and horses (i.e. CIFAR
 -10) from those of house numbers (i.e. SVHN)\, assigning a higher likeliho
 od to the latter when the model is trained on the former. We find such beh
 avior persists even when we restrict the flows to constant-volume transf
 ormations. These admit some theoretical analysis\, and we show that the di
 fference in likelihoods can be explained by the location and variances of 
 the data and the model curvature. Our results suggest caution when using d
 ensity estimates of deep generative models on out-of-distribution inputs.
LOCATION:Faculty of English\, Room SR24
END:VEVENT
END:VCALENDAR
