BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Cambridge ELLIS seminar series – Dr Rika Antonova – 29 Feb 202
 4 – 2pm - Speaker to be confirmed
DTSTART:20240229T140000Z
DTEND:20240229T150000Z
UID:TALK212014@talks.cam.ac.uk
CONTACT:Catarina
DESCRIPTION:Autonomous exploration and data-efficient learning are importa
 nt ingredients for helping machine learning handle the complexity and vari
 ety of real-world interactions. In this talk\, I will describe methods tha
 t provide these ingredients and serve as building blocks for enabling self
 -sufficient robot learning.\nFirst\, I will outline a family of methods th
 at facilitate active global exploration. Specifically\, they enable ultra 
 data-efficient Bayesian optimization in reality by leveraging experience f
 rom simulation to shape the space of decisions. In robotics\, these method
 s enable success with a budget of only 10-20 real robot trials for a range
  of tasks: bipedal and hexapod walking\, task-oriented grasping\, and nonp
 rehensile manipulation.\nNext\, I will describe how to bring simulations c
 loser to reality. This is especially important for scenarios with highly d
 eformable objects\, where simulation parameters influence the dynamics in 
 unintuitive ways. The success here hinges on either finding effective repr
 esentations for the state of deformables or leveraging differentiable simu
 lation and rendering for direct optimization.\nFinally\, I will share the 
 vision of how to combine efficient representations and policy structures t
 o obtain adaptable mobile manipulation that succeeds not only for rigid\, 
 but also for articulated and deformable objects. For this\, our recent wor
 k on generalizing equivariant representations can offer instant generaliza
 tion to changes in object poses and scales. To create a compelling demonst
 ration for these algorithmic advances\, I will share ideas for how to empl
 oy them for solving everyday household tasks\, leveraging a prototype of o
 ur TidyBot system and integrating with large vision-language models.\n
LOCATION:https://cam-ac-uk.zoom.us/j/89787195157?pwd=cXhUNUxnNHNGUUROTUY5U
 Xd5UkNzdz09
END:VEVENT
END:VCALENDAR
