BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Efficient priors for self-supervised learning: application and t
 heories - Yu Wang\, JD AI Research
DTSTART:20221005T120000Z
DTEND:20221005T130000Z
UID:TALK182282@talks.cam.ac.uk
CONTACT:Yuan Huang
DESCRIPTION:Remarkable progress in self-supervised learning has taken pl
 ace over the past two years across various domains. The goal of SSL met
 hods is to learn useful semantic features without human annotations. In
  the absence of human-defined labels\, we expect the deep network to le
 arn a richer feature structure explained by the data itself rather than
  being constrained by human knowledge. Nevertheless\, self-supervised l
 earning still hinges on strong prior knowledge or human-defined pretext
  tasks to effectively pretrain the network. These priors can impose a c
 ertain form of consistency between different views of an image\, or be b
 ased on a pre-defined pretext task such as rotation prediction. This ta
 lk will cover our recent progress and new findings on constructing usef
 ul priors for self-supervised learning (published in T-PAMI and NeurIPS
  2021\, respectively)\, from the perspective of both theory and practic
 al applications. We will also introduce the mainstream state-of-the-art
  self-supervised learning frameworks and the pretext tasks widely used 
 in this field.\n\n\n*Join Zoom Link:*\n\nhttps://maths-cam-ac-uk.zoom.u
 s/j/93331132587?pwd=MlpReFY3MVpyVThlSi85TmUzdTJxdz09 Meeting ID: 933 31
 13 2587 Passcode: 144696
LOCATION:Virtual (see abstract for Zoom link)
END:VEVENT
END:VCALENDAR
