BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Some lessons learned in Multimodal Representations and Transfer - 
 Pranava Madhyastha\, Imperial College London
DTSTART:20181012T110000Z
DTEND:20181012T120000Z
UID:TALK111733@talks.cam.ac.uk
CONTACT:Andrew Caines
DESCRIPTION:Recent approaches that use transfer learning have made inter
 esting contributions in Visual Question Answering and Image Captioning\,
  among other applications. While these models have shown promising resu
 lts\, recent work on understanding deep learning has exposed some of the
 ir glaring weaknesses. In this talk I will discuss three of my recent r
 esearch directions that investigate the transfer learning framework in
  the context of Vision to Language tasks. First\, I will discuss whethe
 r the current approaches to Multimodal Language models are sufficient a
 nd present results on the distributional properties of the models. Seco
 nd\, I will discuss whether the training data for these models are suffi
 cient for inferring the quality of the models. Lastly\, I will briefly d
 iscuss a recent proposal that investigates a method to quantitatively e
 valuate performance. All three works are empirical analyses\, but at the
 same time thought provoking.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
