BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:AI+Pizza April 2018 - Microsoft Research Cambridge/Aalto Universit
 y
DTSTART:20180420T163000Z
DTEND:20180420T180000Z
UID:TALK104470@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:*Title* 	Learning ambient magnetic fields for localisation an
 d mapping \n*Speaker* 	Arno Solin \n*Host* 	Sebastian Nowozin \n*Event D
 ate* 	20/04/2018 5:30PM - 5:45PM \n*Location* 	Auditorium \n \n*Descript
 ion* 	Small disturbances in the Earth's ambient magnetic field can be us
 ed as features in indoor positioning. In this talk I present a recent me
 thod for online modelling and mapping ambient magnetic fields by Gaussia
 n processes. The mapping approach extends well to simultaneous localisat
 ion and mapping (SLAM) by a Rao-Blackwellised particle filter (Sequentia
 l Monte Carlo). I present examples of the method running on data collect
 ed on a smartphone here in Cambridge. (Joint work with Manon Kok and oth
 ers.) \n\n*Title* 	Meta Reinforcement Learning with Latent Variable Gaus
 sian Processes \n*Speaker* 	Steindor Saemundsson \n*Event Date* 	20/04/2
 018 5:45PM - 6:00PM \n*Location* 	Auditorium \n*Mode* 	Room Only \n*Desc
 ription* 	Data efficiency\, i.e.\, learning from small data sets\, is cr
 itical in many practical applications where data collection is time-cons
 uming or expensive\, e.g.\, robotics\, animal experiments or drug design
 . Meta learning is one way to increase the data efficiency of learning a
 lgorithms by generalizing learned concepts from a set of training tasks 
 to unseen\, but related\, tasks. Often\, this relationship between tasks
  is hard-coded or relies in some other way on human expertise. In this p
 aper\, we propose to automatically learn the relationship between tasks 
 using a latent variable model. Our approach finds a variational posterio
 r over tasks and averages over all plausible (according to this posterio
 r) tasks when making predictions. We apply this framework within a model
 -based reinforcement learning setting for learning dynamics models and c
 ontrollers of many related tasks\, and show that our model effectively g
 eneralizes to novel tasks. \n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
