BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Grounded language learning in simulated worlds - Felix Hill\,  Dee
 pMind
DTSTART:20171027T110000Z
DTEND:20171027T120000Z
UID:TALK87631@talks.cam.ac.uk
CONTACT:Anita Verő
DESCRIPTION:Developing systems that can execute symbolic\, language-like i
 nstructions in the physical world is a long-standing challenge for Artific
 ial Intelligence. Previous attempts to replicate human-like grounded langu
 age understanding involved hard-coding linguistic and physical principles\
 , which is notoriously laborious and difficult to scale. Here we show that
  a simple neural-network-based agent without any hard-coded knowledge can 
 exploit general-purpose learning algorithms to infer the meaning of sequen
 tial symbolic instructions as they pertain to a simulated 3D world.\n\nBeg
 inning with no prior knowledge\, the agent learns the meaning of concrete 
 nouns\, adjectives\, more abstract relational predicates and longer\, orde
 r-dependent\, sequences of symbols. The agent naturally generalises predic
 ates to unfamiliar objects\, and can interpret word combinations (phrases)
  that it has never seen before. Moreover\, while its initial learning is s
 low\, the speed at which it acquires new words accelerates as a function o
 f how much it already knows. These observations suggest that the approach 
 may ultimately scale to a wider range of natural language\, which may brin
 g us towards machines capable of learning language via interaction with hu
 man users in the real world.\n\nThe techniques applied in this work will b
 e covered in the course Deep Learning for NLP taught next term in the CL. 
 https://www.cl.cam.ac.uk/teaching/1718/R228/.\n\nBio: Felix is a Research 
 Scientist at DeepMind. He did his PhD at the University of Cambridge with 
 Anna Korhonen\, working on unsupervised language and representation learni
 ng with neural nets. Alongside Anna\, he collaborated with (and learned 
 a lot from) Yoshua Bengio\, Kyunghyun Cho and Jason Weston. In addition t
 o developing computational models that can understand language\, he is in
 terested in using models to better understand how people understand lang
 uage\, and is currently doing both at DeepMind.
LOCATION:FW26\, Computer Laboratory
END:VEVENT
END:VCALENDAR
