Variational Smoothing in Recurrent Neural Network Language Models
- 🎤 Speaker: Dr. Lingpeng Kong (DeepMind)
- 📅 Date & Time: Thursday 27 February 2020, 11:00 - 12:00
- 📍 Venue: Board room, Faculty of English, 9 West Rd (Sidgwick Site)
Abstract
In this talk, we present a new theoretical perspective on data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational distribution (i.e., a mixture of Gaussians whose weights depend on statistics derived from the corpus, such as the unigram distribution). We use this insight to propose a more principled method to apply at prediction time, and we propose natural extensions to data noising under the variational framework. In particular, we propose variational smoothing with tied input and output embedding matrices and an element-wise variational smoothing method. We empirically verify our analysis on two benchmark language modeling datasets and demonstrate performance improvements over existing data noising methods.
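For readers unfamiliar with the data noising scheme the abstract builds on: in one variant from Xie et al. (2017), each token in a training sequence is replaced, with some probability, by a sample from the corpus unigram distribution, which acts as a smoothing prior for the language model. The sketch below illustrates this variant only; the function name, parameters, and NumPy implementation are illustrative assumptions, not code from the talk.

```python
import numpy as np

def unigram_noise(tokens, unigram_probs, gamma=0.1, rng=None):
    """Data noising sketch (one variant from Xie et al., 2017):
    with probability `gamma`, replace each token id by a draw from
    the unigram distribution; otherwise keep the original token."""
    rng = np.random.default_rng() if rng is None else rng
    tokens = np.asarray(tokens)
    # Boolean mask: which positions get noised.
    mask = rng.random(tokens.shape) < gamma
    # Replacement tokens sampled from the unigram distribution.
    noise = rng.choice(len(unigram_probs), size=tokens.shape, p=unigram_probs)
    return np.where(mask, noise, tokens)

# Toy usage: a 5-word vocabulary with a skewed unigram distribution.
unigram = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
batch = [[0, 1, 2], [3, 4, 0]]
print(unigram_noise(batch, unigram, gamma=0.2))
```

The talk's contribution is to reinterpret such noising as variational inference over the embeddings (a mixture of Gaussians weighted by corpus statistics), which then suggests principled prediction-time behavior and the tied-embedding and element-wise extensions mentioned above.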
Series
This talk is part of the Language Technology Lab Seminars series.
Included in Lists
- bld31
- Board room, Faculty of English, 9 West Rd (Sidgwick Site)
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Guy Emerson's list
- Interested Talks
- Language Sciences for Graduate Students
- Language Technology Lab Seminars
- ndk22's list
- ob366-ai4er
- rp587
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449