Deep consequences: Why syntax (as we know it) isn't a thing, and other (shocking?) conclusions from modelling language with neural nets
- Speaker: Felix Hill, Computer Laboratory
- Date & Time: Friday 06 March 2015, 12:00 - 13:00
- Venue: FW26, Computer Laboratory
Abstract
With the development of ‘deeper’ models of language processing, we can start to infer (in a more empirically sound way) the true principles, factors or structures that underlie language. This is because, unlike many other approaches in NLP, deep language models (loosely) reflect the true situation in which humans learn language. Neural language models learn the meanings of words and phrases concurrently with how best to group and combine those meanings, and they are trained to use this knowledge to do something human language users do easily. Such models beat established alternatives at various tasks that humans find easy but machines traditionally find hard. In this talk, I present the results of recent experiments using deep neural nets to model language, and discuss the potential implications for both language science and engineering.
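To make the abstract's central idea concrete, here is a minimal, purely illustrative sketch (not the speaker's model) of a neural language model in which word embeddings and a composition function are learned jointly, by training on next-word prediction. All names and sizes are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]   # toy vocabulary (illustrative)
V, D = len(vocab), 8
idx = {w: i for i, w in enumerate(vocab)}

E = rng.normal(0, 0.1, (V, D))      # word embeddings (learned)
W = rng.normal(0, 0.1, (2 * D, D))  # composition: learns how to combine two word meanings
U = rng.normal(0, 0.1, (D, V))      # output projection for next-word prediction

def forward(w1, w2):
    """Compose two context words, then predict a distribution over the next word."""
    x = np.concatenate([E[idx[w1]], E[idx[w2]]])
    h = np.tanh(x @ W)              # learned grouping/combination of meanings
    logits = h @ U
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

# A few SGD steps on one trigram, ("the", "cat") -> "sat": gradients update
# the composition function AND the embeddings themselves, concurrently.
lr, target = 0.5, idx["sat"]
for _ in range(200):
    x = np.concatenate([E[idx["the"]], E[idx["cat"]]])
    h = np.tanh(x @ W)
    logits = h @ U
    p = np.exp(logits - logits.max()); p /= p.sum()
    d_logits = p.copy(); d_logits[target] -= 1.0   # cross-entropy gradient
    dh = U @ d_logits
    dx = W @ (dh * (1 - h * h))
    U -= lr * np.outer(h, d_logits)
    W -= lr * np.outer(x, dh * (1 - h * h))
    E[idx["the"]] -= lr * dx[:D]    # embeddings are trained too
    E[idx["cat"]] -= lr * dx[D:]

_, probs = forward("the", "cat")
print(vocab[int(np.argmax(probs))])
```

After training on the single trigram, the model assigns its highest probability to "sat"; the point of the sketch is only that the word representations and the rule for combining them are optimized together against a prediction task, as the abstract describes.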
Series: This talk is part of the NLIP Seminar Series.

