BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Autonomous learning of multimodal internal models for robots
 using multiple sources of information - Martina Zambelli\, DeepMind
DTSTART:20180308T140000Z
DTEND:20180308T150000Z
UID:TALK102406@talks.cam.ac.uk
CONTACT:Alberto Padoan
DESCRIPTION:Robots can learn new skills by autonomously acquiring
 internal models to use for action prediction\, planning and control.
 Humans are a successful example of autonomous learning: internal
 models are developed through a learning process\, which starts in the
 first months of infants’ life\, based on experience and exploration.
 Similarly\, through autonomous exploration a robot can bootstrap
 internal models of its own sensorimotor system that enable it to
 predict the consequences of its actions (forward models) or the
 production of new actions to reach target states (inverse models).
 The use of multiple sources of information can benefit such an
 autonomous learning process. I will first introduce an ensemble
 learning method that combines multiple prediction models to build
 forward models. Then\, I will illustrate how the use of multiple
 sensory modalities (e.g. vision\, touch\, proprioception) plays a
 fundamental role in learning and performing multimodal tasks (such as
 playing a piano keyboard). Finally\, I will present a multimodal deep
 variational auto-encoder architecture that allows a humanoid iCub
 robot to predict and imitate other agents' actions\, based only on
 its own learned internal model.\n
LOCATION:Cambridge University Engineering Department\, Lecture Room 12
END:VEVENT
END:VCALENDAR
