BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Computational Neuroscience Journal Club - Adrianna Loback (Control
  Group)
DTSTART:20180529T150000Z
DTEND:20180529T160000Z
UID:TALK106438@talks.cam.ac.uk
CONTACT:Rodrigo Echeveste
DESCRIPTION:Adrianna Loback will cover:\n\n• A Dynamic Connectome Suppor
 ts the Emergence of Stable Computational Function of Neural Circuits throu
 gh Reward-Based Learning\n\n• David Kappel\, Robert Legenstein\, Stefan 
 Habenschuss\, Michael Hsieh and Wolfgang Maass\n\n• eNeuro (in press\, 2
 018)\n\n• http://www.eneuro.org/content/eneuro/early/2018/04/02/ENEURO.0
 301-17.2018.full.pdf\n\n\nVersion of the manuscript with in-line figures:\
 n\nhttps://arxiv.org/pdf/1704.04238.pdf\n\nAbstract: Synaptic connections 
 between neurons in the brain are dynamic because of continuously ongoing s
 pine dynamics\, axonal sprouting\, and other processes. In fact\, it was r
 ecently shown that the spontaneous synapse-autonomous component of spine d
 ynamics is at least as large as the component that depends on the history 
 of pre- and postsynaptic neural activity. These data are inconsistent with
 common models for network plasticity\, and raise the questions of how neura
 l circuits can maintain a stable computational function in spite of these c
 ontinuously ongoing processes\, and what functional uses these ongoing pro
 cesses might have. Here\, we present a rigorous theoretical framework for 
 these seemingly stochastic spine dynamics and rewiring processes in the co
 ntext of reward-based learning tasks. We show that spontaneous synapse-aut
 onomous processes\, in combination with reward signals such as dopamine\, 
 can explain the capability of networks of neurons in the brain to configur
 e themselves for specific computational tasks\, and to compensate automati
 cally for later changes in the network or task. Furthermore\, we show theore
 tically and through computer simulations that stable computational perform
 ance is compatible with continuously ongoing synapse-autonomous changes. A
 fter good computational performance has been reached\, these changes cause
  primarily a slow drift of network architecture and dynamics in task-irrel
 evant dimensions\, a
 s observed for neural activity in motor cortex and other areas. On the mor
 e abstract level of reinforcement learning the resulting model gives rise 
 to an understanding of reward-driven network plasticity as continuous samp
 ling of network configurations. 
LOCATION:Cambridge University Engineering Department\, CBL\, BE4-38 (http:
 //learning.eng.cam.ac.uk/Public/Directions)
END:VEVENT
END:VCALENDAR
