BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Learning to Learn without Gradient Descent by Gradient Descent - Y
 utian Chen\, DeepMind
DTSTART:20171124T110000Z
DTEND:20171124T120000Z
UID:TALK96196@talks.cam.ac.uk
CONTACT:Adrian Weller
DESCRIPTION:Abstract: \nWe learn recurrent neural network optimizers train
 ed on simple synthetic functions by gradient descent. We show that these l
 earned optimizers exhibit a remarkable degree of transfer in that they can
  be used to efficiently optimize a broad range of derivative-free black-bo
 x functions\, including Gaussian process bandits\, simple control objectiv
 es\, global optimization benchmarks and hyper-parameter tuning tasks. Up t
 o the training horizon\, the learned optimizers learn to trade off explora
 tion and exploitation\, and compare favourably with heavily engineered Bay
 esian optimization packages for hyper-parameter tuning.\n\nBio:\nYutian is
  a senior research scientist at DeepMind. He obtained his PhD in 2013 unde
 r the supervision of Prof. Max Welling at the University of California\, I
 rvine. He worked on efficient training and sampling algorithms for probabi
 listic graphical models. After graduation\, Yutian moved to the Universit
 y of Cambridge to do a postdoc with Prof. Zoubin Ghahramani in the Comput
 ational and Biological Learning Lab\, where he worked on scaling up Bayes
 ian inference methods for large-scale problems. His current research focu
 ses on using deep learning and reinforcement learning methods to apply pr
 obabilistic models to challenging real-world problems.\n
LOCATION:CBL Seminar Room
END:VEVENT
END:VCALENDAR
