BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Deep consequences: Why syntax (as we know it) isn't a thing\, and 
 other (shocking?) conclusions from modelling language with neural nets - F
 elix Hill\, Computer Laboratory
DTSTART:20150306T120000Z
DTEND:20150306T130000Z
UID:TALK57812@talks.cam.ac.uk
CONTACT:Tamara Polajnar
DESCRIPTION:With the development of 'deeper' models of language processing
 \, we can start to infer (in a more empirically sound way) the true princ
 iples\, factors or structures that underlie language. This is because\, u
 nlike many other approaches in NLP\, deep language models (loosely) reflec
 t the true situation in which humans learn language. Neural language model
 s learn the meaning of words and phrases concurrently with how best to gro
 up and combine these meanings\, and they are trained to use this knowledge
  to do something human language users easily do. Such models beat establis
 hed alternatives at various tasks that humans find easy but machines tradi
 tionally find hard. In this talk\, I present the results of recent experim
 ents using deep neural nets to model language\, and discuss the potential 
 implications for both language science and engineering.\n
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
