BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Online Meta-Learning - Massimiliano Pontil\, University College Lo
 ndon
DTSTART:20190214T110000Z
DTEND:20190214T120000Z
UID:TALK120307@talks.cam.ac.uk
CONTACT:Carl Edward Rasmussen
DESCRIPTION:We study the problem in which a series of learning tasks\nare 
 observed sequentially and the goal is to incrementally adapt a\nlearning a
 lgorithm in order to improve its performance on future\ntasks. The tasks a
 re sampled from a meta-distribution\, called the\nenvironment in the learn
 ing-to-learn literature (Baxter 2000). We\nfocus on linear learning algori
 thms based on regularized empirical\nrisk minimization such as ridge regre
 ssion or support vector machines.\nThe algorithms are parametrized by eith
 er a (representation) matrix\napplied to the raw inputs or by a bias vecto
 r used in the regularizer.\nIn both settings\, we develop a computational
 ly efficient meta-algorithm\nto incrementally adapt the learning algorith
 m af
 ter a task dataset is\nobserved. The meta-algorithm performs stochastic gr
 adient descent on a\nproxy objective of the risk of the learning algorithm
 . We derive\nbounds on the performance of the meta-algorithm\, measured by
  the\naverage risk of the learning algorithm on random tasks from the\nenv
 ironment. Our analysis leverages ideas from multitask learning and\nlearni
 ng-to-learn with tools from online learning and stochastic\noptimization. 
 In the last part of the talk\, we discuss extensions of\nthe framework to 
 nonlinear models such as deep neural nets and draw\nlinks between meta-lear
 ning\, bilevel optimization and gradient-based\nhyperparameter optimizatio
 n.\n
LOCATION:Engineering Department\, CBL Room BE-438.
END:VEVENT
END:VCALENDAR
