
Compositional Generalization and Learning


Cognition is highly flexible—we perform many different tasks and continually adapt our behaviour to changing demands. One way to switch flexibly between tasks and rapidly learn new ones is to reuse neural representations and computational components. In the first part of the journal club, Rui will present a paper [1] showing that such compositionality is found in the monkey brain during three compositionally related tasks. In neural recordings, the authors found that task-relevant information about stimulus features and motor actions was represented in subspaces of neural activity that were shared across tasks. Monkeys adapted to changes in the task by iteratively updating their internal belief about the current task and then, based on this belief, flexibly engaging the shared sensory and motor subspaces relevant to the task.

However, the paper does not attempt to model subject behaviour, nor does it suggest how such modelling could be done. Therefore, in the second half of the talk, Daniel will introduce “infinite compositional contextual bandits”, a class of Bayesian models currently being developed to investigate the effects of compositional generalisation on decision-making during continual learning in animals.
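The belief-updating scheme described above—infer which task is active, then act accordingly—can be sketched in miniature. The following is a hedged illustration only, not the authors' model: it uses a small, finite set of hypothetical tasks (each a reward probability per arm) rather than the nonparametric "infinite" construction, and all numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three tasks, each defined by a reward probability
# per arm. These values are illustrative, not taken from the paper.
tasks = np.array([
    [0.9, 0.1, 0.1],   # task 0 rewards arm 0
    [0.1, 0.9, 0.1],   # task 1 rewards arm 1
    [0.1, 0.1, 0.9],   # task 2 rewards arm 2
])
n_tasks, n_arms = tasks.shape

belief = np.full(n_tasks, 1.0 / n_tasks)  # uniform prior over tasks
true_task = 2                              # unknown to the agent

for t in range(50):
    # Act greedily on the expected reward per arm, averaging the
    # per-task reward probabilities under the current task belief.
    expected = belief @ tasks
    arm = int(np.argmax(expected))

    # Sample a binary reward from the true (hidden) task.
    reward = int(rng.random() < tasks[true_task, arm])

    # Bayes update: likelihood of the observed reward under each task.
    lik = tasks[:, arm] if reward else 1.0 - tasks[:, arm]
    belief = belief * lik
    belief /= belief.sum()

print(belief.round(3))  # posterior concentrates on the true task
```

Each trial, the agent both exploits its current belief (greedy arm choice) and refines that belief from the reward outcome, mirroring the iterative task inference described for the monkeys.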

[1] Tafazoli, S., Bouchacourt, F. M., Ardalan, A., Markov, N. T., Uchimura, M., Mattar, M. G., ... & Buschman, T. J. (2025). Building compositional tasks with shared neural subspaces. Nature, 1-9.

This talk is part of the Computational Neuroscience series.


© 2006-2025 Talks.cam, University of Cambridge.