Recurrent Continuous Translation Models
- Speaker: Nal Kalchbrenner (University of Oxford)
- Date & Time: Monday 07 October 2013, 12:00-13:00
- Venue: Department of Engineering, LR12
Abstract
Deep learning methods are well-suited for constructing distributed, continuous representations for linguistic units ranging from characters to sentences. These learnt representations come with an inherent, task-dependent notion of similarity that allows the models to overcome sparsity issues and to generalise well beyond the training domain. In this talk we extend these methods to the problem of machine translation and introduce a class of probabilistic translation models (RCTMs) that rely purely on continuous representations of the source and target sentences. We explore several model architectures and show that the models obtain translation perplexities significantly lower than those of state-of-the-art alignment-based translation models. We also investigate the models’ ability to generate translations directly and solely from the underlying continuous space.
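To make the setup concrete, below is a minimal runnable sketch of a conditional recurrent language model in this spirit: a continuous source representation conditions a recurrent decoder over target words, and the per-word log-probabilities yield a translation perplexity. Everything here is illustrative rather than the talk's actual model: the toy dimensions, the mean-of-embeddings encoder (standing in for a richer sentence model such as a convolutional one), and the function names (`encode_source`, `target_log_prob`) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative): source/target vocabularies, embedding and hidden dims.
V_SRC, V_TGT, D, H = 20, 20, 8, 16

# Randomly initialised parameters (untrained).
E_src = rng.normal(0, 0.1, (V_SRC, D))  # source word embeddings
E_tgt = rng.normal(0, 0.1, (V_TGT, D))  # target word embeddings
W = rng.normal(0, 0.1, (H, H))          # hidden-to-hidden weights
U = rng.normal(0, 0.1, (H, D))          # input-to-hidden weights
C = rng.normal(0, 0.1, (H, D))          # source-conditioning weights
O = rng.normal(0, 0.1, (V_TGT, H))      # hidden-to-output weights

def encode_source(src_ids):
    """Continuous source-sentence representation. A mean of word embeddings
    stands in here for a more structured sentence encoder (assumption)."""
    return E_src[src_ids].mean(axis=0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def target_log_prob(src_ids, tgt_ids):
    """Sum of log p(y_t | y_<t, source) under a recurrent decoder whose
    hidden state is conditioned on the continuous source vector."""
    s = encode_source(src_ids)
    h = np.zeros(H)
    prev = np.zeros(D)                  # 'previous word' embedding at the start
    logp = 0.0
    for y in tgt_ids:
        h = np.tanh(W @ h + U @ prev + C @ s)
        p = softmax(O @ h)
        logp += np.log(p[y])
        prev = E_tgt[y]
    return logp

# Perplexity of one (source, target) pair under the untrained model.
src = [3, 7, 1]
tgt = [5, 2, 9, 4]
ppl = np.exp(-target_log_prob(src, tgt) / len(tgt))
print(f"perplexity: {ppl:.2f}")  # roughly V_TGT for random parameters
```

Training such a model would fit the parameters to minimise the negative log-likelihood over a parallel corpus, driving the perplexity well below the random-parameter baseline printed above.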
Bio
Nal is a second-year PhD student in the Computational Linguistics and Quantum groups at Oxford. Before joining Oxford, he studied computer science, maths and logic at the ILLC and at Stanford. He is a recipient of the Clarendon fellowship.
Series
This talk is part of the CUED Speech Group Seminars series.
Included in Lists
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- CUED Speech Group Seminars
- Department of Engineering - LR12
- Guy Emerson's list
- Information Engineering Division seminar list
- PhD related