BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Latent Variable Models for Text Generation - Xiaoyu Shen\, Saarlan
 d University / Max Planck Institute
DTSTART:20191010T100000Z
DTEND:20191010T110000Z
UID:TALK131887@talks.cam.ac.uk
CONTACT:Edoardo Maria Ponti
DESCRIPTION:Latent variable models provide an effective way to specify p
 rior knowledge and to uncover the intermediate decision process of natu
 ral language generation. In this talk\, we will go through two specific app
 lications. The first incorporates latent continuous variables into a di
 alogue generation model. The latent variable is trained to maximize the m
 utual information with neighboring utterances. We show that the latent v
 ariable component significantly strengthens the connection between the g
 enerated response and its surrounding context\, leading to a more engagi
 ng human-machine conversation. The second explicitly models the content s
 election process with discrete latent variables. By lowering the trainin
 g variance with a variational autoencoder objective\, the model successf
 ully decouples content selection from the black-box generation model on b
 oth sentence compression and data-to-text tasks\, enabling us to control c
 ontent selection in an interpretable way.
LOCATION:Board room\, Faculty of English\, 9 West Rd (Sidgwick Site)
END:VEVENT
END:VCALENDAR
