BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Zero-shot Language Learning through Bayesian Neural Models - Edoar
 do Maria Ponti (University of Cambridge)
DTSTART:20200207T120000Z
DTEND:20200207T130000Z
UID:TALK136129@talks.cam.ac.uk
CONTACT:James Thorne
DESCRIPTION:A key feature of general linguistic intelligence is the abilit
 y to generalise to new domains with sample efficiency\, i.e. based on zero
  or few examples. This goal is of great practical importance\, as most com
 binations of tasks and languages lack in-domain examples for supervised tr
 aining. A possible solution consists in biasing the induction of neural mo
 dels towards unseen languages by constructing an informative prior over pa
 rameters (imbued with universal linguistic knowledge) from seen languages.
  Subsequently\, MAP inference allows for learning efficiently from few in-
 domain examples by leveraging the prior information. Another solution reso
 rts to estimating parameters for unseen task-language combinations based o
 n seen combinations. In particular\, the space of neural parameters can be
  factorised into its constituent latent variables\, one for each task and
  language. The posteriors over these latent variables can be learned throu
 gh stochastic variational inference. In this talk\, I will argue that thes
 e
  approaches yield comparable or better performance than state-of-the-art z
 ero-shot transfer methods for character-level language modelling\, POS tag
 ging and Named Entity Recognition over a wide and typologically diverse sa
 mple of languages. What is more\, the Bayesian treatment of the inference 
 problem allows for quantifying the uncertainty of each prediction\, which 
 is especially valuable in settings characterised by distribution shifts su
 ch as zero-shot transfer.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
