BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Provable representation learning in deep learning - Jason Lee (Pri
 nceton University)
DTSTART:20201113T160000Z
DTEND:20201113T170000Z
UID:TALK152803@talks.cam.ac.uk
CONTACT:Dr Sergio Bacallado
DESCRIPTION:Deep representation learning seeks to learn a data representat
 ion that transfers to downstream tasks. In this talk\, we study two forms 
 of representation learning: supervised pre-training and self-supervised le
 arning.\n\nSupervised pre-training uses a large labeled source dataset to 
 learn a representation\, then trains a classifier on top of the representa
 tion. We prove that supervised pre-training can pool the data from all sou
 rce tasks to learn a good representation which transfers to downstream tas
 ks with few labeled examples.\n\nSelf-supervised learning creates auxiliar
 y pretext tasks that do not require labeled data to learn representations.
  These pretext tasks are created solely using input features\, such as pre
 dicting a missing image patch\, recovering the colour channels of an image
 \, or predicting missing words. Surprisingly\, predicting this known infor
 mation helps in learning a representation effective for downstream tasks. 
 We prove that under a conditional independence assumption\, self-supervise
 d learning provably learns representations.\n
LOCATION:https://maths-cam-ac-uk.zoom.us/j/92821218455?pwd=aHFOZWw5bzVReU
 NYR2d5OWc1Tk15Zz09
END:VEVENT
END:VCALENDAR
