BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Alternatives to Backpropagation - Guillaume Hennequin (University o
 f Cambridge)
DTSTART:20240227T150000Z
DTEND:20240227T170000Z
UID:TALK212728@talks.cam.ac.uk
CONTACT:Puria Radmard
DESCRIPTION:Please join us for our Computational Neuroscience journal club
  on Tuesday 27th February at 3pm UK time in the CBL seminar room\n\nThe ti
 tle is “Alternatives to Backpropagation”\, presented by Youjing Yu and
  Guillaume Hennequin.\n\nSummary:\n\nBackpropagation is one of the most wi
 dely used algorithms for training neural networks. However\, despite its p
 opularity\, there are several arguments against the use of backpropagation
 \, one of the most important being its biological implausibility. In this 
 journal club meeting\, we are going to take a look at some alternatives
  developed to replace backpropagation.\n\nWe start by digesting the Forwa
 rd-Forward a
 lgorithm proposed by Geoffrey Hinton [1]. Instead of running one forward p
 ass through the network followed by one backward pass as in backpropagatio
 n\, the Forward-Forward algorithm utilises two forward passes\, one with p
 ositive\, real data and another with negative\, fake data. Each layer in t
 he network has its own objective function\, which is to generate high “g
 oodness” for positive data and low “goodness” for negative data. We 
 will dive into the working principles of the algorithm\, its effectiveness
  on small problems and the associated limitations.\n\nNext\, we will prese
 nt another cool idea that has been independently re-discovered by several 
 labs\, and was perhaps most cleanly articulated in Meulemans et al.\, Neur
 IPS 2022. This idea phrases learning as a least-control problem: a feedbac
 k control loop is set up that continuously keeps the learning system (e.g.
  neural network) in a state of minimum loss\, and learning becomes the pro
 blem of progressively doing away with controls. As it turns out\, gradient
  information is available in the control signals themselves\, such that le
 arning becomes local. We will give a general introduction and history of t
 his idea\, and look into Meulemans et al. in some detail.\n\n[1] Hinton\, 
 Geoffrey. "The forward-forward algorithm: Some preliminary investigations.
 " arXiv preprint arXiv:2212.13345 (2022).\n[2] Meulemans\, Alexander\, et 
 al. "The least-control principle for local learning at equilibrium." Advan
 ces in Neural Information Processing Systems 35 (2022): 33603-33617.
LOCATION:CBL Seminar Room\, Engineering Department\, 4th floor Baker build
 ing
END:VEVENT
END:VCALENDAR
