BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:How to Pay Attention: Learning to Transfer Knowledge between Sente
 nces and Tokens - Marek Rei (University of Cambridge)
DTSTART:20190509T100000Z
DTEND:20190509T110000Z
UID:TALK124822@talks.cam.ac.uk
CONTACT:Edoardo Maria Ponti
DESCRIPTION:Self-attention architectures allow models to dynamically decid
 e which areas of the input should receive more focus. During the construct
 ion of text representations\, attention weights also provide a way of quan
 tifying the importance of different input areas. In this talk\, we investi
 gate how attention mechanisms can be turned into sequence labelers\, openi
 ng up some new and interesting applications. These networks learn to predi
 ct labels for individual tokens\, based only on sentence-level supervision
 \, even without having seen any examples of sequence labeling. In addition
 \, optimizing on the token level explicitly teaches the model where it sho
 uld be focusing\, leading to improvements in text classification. We will 
 also discuss experiments with learning directly from the human cognitive s
 ignal\, guiding the models to internally behave more like their users. The
  resulting architectures for text classification and sequence labeling are
  more accurate\, more interpretable and make decisions in more predictable
  ways.
LOCATION:Board room\, Faculty of English\, 9 West Rd (Sidgwick Site)
END:VEVENT
END:VCALENDAR
