BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Gradient-based Hyperparameter Optimisation - Ross Clarke (Universi
 ty of Cambridge)
DTSTART:20201111T110500Z
DTEND:20201111T123000Z
UID:TALK153889@talks.cam.ac.uk
CONTACT:Elre Oldewage
DESCRIPTION:We know the ultimate performance of our machine learning syste
 ms depends crucially on our training hyperparameters\, motivating work to 
 automate their selection. But traditional methods require many repeated ru
 ns and scale poorly to large numbers of hyperparameters (including variabl
 e schedules and per-parameter optimiser settings). New gradient-based meth
 ods aim to address these issues by updating hyperparameters during trainin
 g itself\, providing a more direct update signal than the black-box models
  used previously. In this talk\, we explore the evolution of these methods
  and some recent developments\, culminating in algorithms which can feasib
 ly optimise millions of hyperparameters in parallel with network weights.\
 n\nOptional Reading:\n\nOur exposition will not assume any pre-reading. Ho
 wever\, the following recent paper closely matches our notation and draws 
 on much of the relevant literature\, so would provide useful familiarisati
 on for anybody who wishes:\n\nJonathan Lorraine\, Paul Vicol\, David Duven
 aud\,\n_Optimizing Millions of Hyperparameters by Implicit Differentiation
 _\,\nAISTATS 2020\n\nhttp://proceedings.mlr.press/v108/lorraine20a.html (a
 void the out-of-date arXiv version)
LOCATION:https://eng-cam.zoom.us/j/86068703738?pwd=YnFleXFQOE1qR1h6Vmtwbno
 0LzFHdz09
END:VEVENT
END:VCALENDAR
