BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Learning language by observing the world and learning about the wo
 rld from language - Aida Nematzadeh\, DeepMind
DTSTART:20210318T110000Z
DTEND:20210318T120000Z
UID:TALK158242@talks.cam.ac.uk
CONTACT:Marinela Parovic
DESCRIPTION:Children learn about the visual world from implicit supervisio
 n that language provides. Most children learn their language\, at least to
 some extent\, by observing the world. Recently released datasets of instr
 uctional videos are interesting as they can be considered a rough approxim
 ation of a child’s visual and linguistic experience -- in these videos\,
 the narrator performs a high-level task (e.g.\, cooking pasta) while desc
 ribing the steps involved in that task (e.g.\, boiling water). Moreover\,
 these datasets pose challenges similar to those children need to address
 \; for example\, identifying activities relevant to the task (e.g.\, boili
 ng water) and ignoring the rest (e.g.\, shaking head). I will present two
 projects where we study the interaction of visual and linguistic signals
 in these videos: (1) We show that using language and the structure of tas
 ks is important in discovering action boundaries. (2) I will discuss how
 the visual signal improves the quality of unsupervised word translation\,
 especially for dissimilar languages and for settings where we do not have
 access to large corpora.
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
