BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Theory and Practice of Infinitely Wide Neural Networks - Guest Tal
 k - Roman Novak\, Google Brain
DTSTART:20220928T100000Z
DTEND:20220928T113000Z
UID:TALK178784@talks.cam.ac.uk
CONTACT:Elre Oldewage
DESCRIPTION:The common observation that wider (in the number of hidden
  units/channels/attention heads) neural networks perform better motivates
  studying them in the infinite-width limit.\n\nRemarkably\, infinitely
  wide networks can be easily described in closed form as Gaussian proc
 esses (GPs) at initialization\, during\, and after training\, whether
  gradient-based or fully Bayesian. This provides closed-form test-set
  predictions and uncertainties from an infinitely wide network without
  ever instantiating it (!).\n\nThese infinitely wide networks have bec
 ome powerful models in their own right\, establishing several SOTA res
 ults\, and are used in applications including hyper-parameter selectio
 n\, neural architecture search\, meta-learning\, active learning\, and
  dataset distillation.\n\nThe talk will provide a high-level overview
  of our work at Google Brain on infinite-width networks. In the first
  part I will derive the core results\, providing intuition for why inf
 inite-width networks are GPs. In the second part I will discuss the ch
 allenges of implementing and scaling up these GPs\, and their solution
 s. In the third part I will conclude with example applications made po
 ssible by infinite-width networks.\n\nThe talk does not assume familia
 rity with the topic beyond a general ML background.
LOCATION:Cambridge University Engineering Department\, CBL Seminar room BE
 4-38
END:VEVENT
END:VCALENDAR
