BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Deep consequences: Why syntax (as we know it) isn't a thing\, and 
 other (shocking?) conclusions from modelling language with neural nets. - 
 Felix Hill\, Computer Laboratory
DTSTART:20150529T110000Z
DTEND:20150529T120000Z
UID:TALK58498@talks.cam.ac.uk
CONTACT:Tamara Polajnar
DESCRIPTION:With the development of 'deeper' models of language processing
 \, we can start to infer (in a more empirically sound way) the true princ
 iples\, factors or structures that underlie language. This is because\, un
 like many other approaches in NLP\, deep language models (loosely) reflect
  the true situation in which humans learn language. Neural language models
  learn the meaning of words and phrases concurrently with how best to gro
 up and combine these meanings\, and they are trained to use this knowledge
  to do something that human language users do easily. Such models beat es
 tablished alternatives at various tasks that humans find easy but machines
  traditionally find hard. In this talk\, I present the results of recent e
 xperiments using deep neural nets to model language\, including the latest
  results from our paper 'Learning to Understand Phrases by Embedding the D
 ictionary'\, in which we apply a recurrent net with long short-term memory
  to a general-knowledge question-answering task. I conclude by discussing
  the potential implications of all of this for both language science and e
 ngineering.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
