BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:The Bayesian Learning Rule for Adaptive AI - Emti Khan\, RIKEN Cen
 ter for Advanced Intelligence Project
DTSTART:20220316T110000Z
DTEND:20220316T123000Z
UID:TALK165652@talks.cam.ac.uk
CONTACT:Elre Oldewage
DESCRIPTION:Humans and animals have a natural ability to autonomously lear
 n and quickly adapt to their surroundings. How can we design AI systems th
 at do the same? In this talk\, I will present Bayesian principles to bridg
 e such gaps between humans and AI. I will show that a wide variety of mach
 ine-learning algorithms are instances of a single learning rule called the
  Bayesian learning rule. The rule unravels a dual perspective yielding new
  adaptive mechanisms for machine-learning based AI systems. My hope is to 
 convince the audience that Bayesian principles are indispensable for an AI
  that learns as efficiently as we do.\n\nMain reference:\n- The Bayesian L
 earning Rule\, (Preprint) M.E. Khan\, H. Rue [ "arXiv":https://arxiv.org/a
 bs/2107.04562 ] [ "Tweet":https://twitter.com/EmtiyazKhan/status/141449892
 2584711171?s=20 ]\n\nAdditional references:\n* Knowledge-Adaptation Priors\
 , (NeurIPS 2021) M.E. Khan\, S. Swaroop [ "arXiv":http://arxiv.org/abs/210
 6.08769 ] [ "OpenReview":https://openreview.net/forum?id=_cXX-Dr7sf0 ] [ "
 Slides":https://emtiyaz.github.io/papers/July23_2021_CL_workshop.pdf ] [ "
 Tweet":https://t.co/v68OrIE0Xz?amp=1 ] [ "SlidesLive Video":https://t.co/X
 BRwRh3Lde?amp=1 ]\n* Dual Parameterization of Sparse Variational Gaussian 
 Processes\, (NeurIPS 2021) P. Chang\, V. Adam\, M.E. Khan\, A. Solin [ "ar
 Xiv":https://arxiv.org/abs/2111.03412 ]\n* Continual Deep Learning by Func
 tional Regularisation of Memorable Past (NeurIPS 2020) P. Pan\, S. Swaroop
 \, A. Immer\, R. Eschenhagen\, R. E. Turner\, M.E. Khan [ "arXiv":https://
 arxiv.org/abs/2004.14070 ] [ "Code":https://github.com/team-approx-bayes/f
 romp ] [ "Poster":https://emtiyaz.github.io/FROMP_NeurIPS2020_poster.pdf ]
 \n* Approximate Inference Turns Deep Networks into Gaussian Processes\, (N
 eurIPS 2019) M.E. Khan\, A. Immer\, E. Abedi\, M. Korzepa [ "arXiv":https
 ://arxiv.org/abs/1906.01930 ] [ "Code":https://github.com/team-approx-baye
 s/dnn2gp ]\n\nBio: Emtiyaz Khan (also known as Emti) is a team leader at t
 he RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo\, where h
 e leads the Approximate Bayesian Inference Team. He is also an external prof
 essor at the Okinawa Institute of Science and Technology (OIST). Previousl
 y\, he was a postdoc and then a scientist at Ecole Polytechnique Fédéral
 e de Lausanne (EPFL)\, where he also taught two large machine learning cou
 rses and received a teaching award. He completed his PhD in machine learni
 ng at the University of British Columbia in 2012. The main goal of Emti’s r
 esearch is to understand the principles of learning from data and use them
  to develop algorithms that can learn like living beings. For more than a 
 decade\, his work has focused on developing Bayesian methods that could le
 ad to such fundamental principles. The Approximate Bayesian Inference Team
  now continues to use these principles\, and to derive new ones\, to solve
  real-world problems.
LOCATION:Cambridge University Engineering Department\, CBL Seminar room BE
 4-38
END:VEVENT
END:VCALENDAR
