BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:A Mutual Information Maximization Perspective of Language Represen
 tation Learning - Lingpeng Kong (DeepMind)
DTSTART:20191101T120000Z
DTEND:20191101T130000Z
UID:TALK128497@talks.cam.ac.uk
CONTACT:James Thorne
DESCRIPTION:In this talk\, we show that state-of-the-art word representat
 ion learning methods maximize an objective function that is a lower bou
 nd on the mutual information between different parts of a word sequenc
 e (i.e.\, a sentence). Our formulation provides an alternative perspect
 ive that unifies classical word embedding models (e.g.\, Skip-gram) an
 d modern contextual embeddings (e.g.\, BERT\, XLNet). In addition to en
 hancing our theoretical understanding of these methods\, our derivation le
 ads to a principled framework that can be used to construct new self-s
 upervised tasks. We provide an example by drawing inspiration from rela
 ted methods based on mutual information maximization that have been suc
 cessful in computer vision\, and introduce a simple self-supervised obj
 ective that maximizes the mutual information between a global sentence re
 presentation and n-grams in the sentence. Our analysis offers a holisti
 c view of representation learning methods\, helping to transfer knowled
 ge and translate progress across multiple domains (e.g.\, natural langu
 age processing\, computer vision\, audio processing).
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
