BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Discovering reward-guided learning strategies from large-scale dat
 asets - Kimberly Stachenfeld (DeepMind\, Columbia)
DTSTART:20250603T100000Z
DTEND:20250603T113000Z
UID:TALK232786@talks.cam.ac.uk
CONTACT:Daniel Kornai
DESCRIPTION:Understanding the neural mechanisms of reward-guided learning 
 is a long-standing goal of computational neuroscience. Recent methodologic
 al innovations enable us to collect ever larger neural and behavioral data
 sets. This presents both opportunities to understand learning in the brain
 at scale and new methodological challenges. In the 
 first part of the talk\, I will discuss our recent insights into the mecha
 nisms by which zebra finch songbirds learn to sing. Dopamine has long been
  thought to guide reward-based trial-and-error learning by encoding reward
  prediction errors. However\, it is unknown whether the learning of natura
 l behaviors\, such as developmental vocal learning\, occurs through dopam
 ine-based reinforcement. Longitudinal recordings of dopamine and bird song
 s reveal that dopamine activity is indeed consistent with encoding a rewar
 d prediction error during naturalistic learning. \n\nIn the second part of
  the talk\, I will discuss recent work we are doing at DeepMind to deve
 lop tools for automatically discovering interpretable models of behavior d
 irectly from animal choice data. Our method\, dubbed CogFunSearch\, uses L
 LMs within an evolutionary search process to “discover” novel
  models in the form of Python programs that excel at accurately predicting
  animal behavior during reward-guided learning. The discovered programs re
 veal novel patterns of learning and choice behavior that update our unders
 tanding of how the brain solves reinforcement learning problems.
LOCATION:CBL Seminar Room\, Engineering Department\, 4th floor Baker build
 ing
END:VEVENT
END:VCALENDAR
