Latent Variable Models for Text Generation
- Speaker: Xiaoyu Shen, Saarland University / Max Planck Institute
- Date & Time: Thursday 10 October 2019, 11:00-12:00
- Venue: Board room, Faculty of English, 9 West Rd (Sidgwick Site)
Abstract
Latent variable models provide an effective way to specify prior knowledge and uncover the intermediate decision process of natural language generation. In this talk, we will walk through two specific applications. The first incorporates latent continuous variables into a dialogue generation model. The latent variable is trained to maximize the mutual information with neighboring utterances. We show that the latent variable component significantly strengthens the connection between the generated response and its surrounding context, leading to a more engaging human-machine conversation. The second explicitly models the content selection process with discrete latent variables. By lowering the training variance with a variational autoencoder objective, the model successfully decouples content selection from the black-box generation model on both sentence compression and data-to-text tasks, enabling us to control content selection in an interpretable way.
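The second application hinges on low-variance training of discrete latent variables. The abstract does not spell out which estimator is used; one widely used option for this setting is the Gumbel-softmax relaxation, sketched below in plain Python (the function name and interface are illustrative, not from the talk):

```python
import math
import random

def gumbel_softmax_sample(logits, temperature=1.0):
    """Relaxed sample from a categorical distribution over choices.

    Adds Gumbel(0, 1) noise to each logit, then applies a
    temperature-scaled softmax; as temperature -> 0 the output
    approaches a one-hot selection, while staying differentiable
    with respect to the logits.
    """
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    noise = [-math.log(-math.log(max(random.random(), 1e-12)))
             for _ in logits]
    scores = [(l + g) / temperature for l, g in zip(logits, noise)]
    # Numerically stable softmax over the perturbed scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example: a soft 3-way "keep this content unit?" decision
sample = gumbel_softmax_sample([2.0, 0.5, -1.0], temperature=0.5)
```

Replacing a hard sample with this relaxation is one standard way to keep gradient variance low when a discrete selection step sits inside an otherwise end-to-end differentiable generator.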
Series: This talk is part of the Language Technology Lab Seminars series.
Included in Lists
- bld31
- Board room, Faculty of English, 9 West Rd (Sidgwick Site)
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Guy Emerson's list
- Interested Talks
- Language Sciences for Graduate Students
- Language Technology Lab Seminars
- ndk22's list
- ob366-ai4er
- rp587
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.