
Lazy-rich learning


The lazy-rich dichotomy of learning regimes is a crucial principle underlying learning theory in both biological and artificial agents. Neural networks in the lazy regime are characterized by minimal weight changes, fast learning, and high-dimensional representations at convergence, corresponding to kernel regression with the Neural Tangent Kernel; the rich regime, by contrast, is characterized by lower-dimensional feature learning, slower learning, and larger (feature) gradients. We will first briefly review these concepts as presented in Farrell et al. (https://www.sciencedirect.com/science/article/pii/S0959438823001058), and then discuss evidence of rich neural representations in humans and macaques trained to perform context-dependent decision-making. We will then review a recent ICML ’25 paper by Chou et al. (https://arxiv.org/pdf/2503.18114) that demonstrates, theoretically and empirically, that the laziness or richness of the learning regime can be assessed using experimentally accessible metrics of representational geometry, rather than by probing individual neurons, synapses, or features.
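One commonly used, experimentally accessible geometry metric of the kind discussed above is the participation ratio, an effective dimensionality of a population's activity computed from the eigenvalues of its covariance matrix. The sketch below is illustrative only (it is not taken from either cited paper): it contrasts a low-dimensional, "rich"-like representation with a high-dimensional, "lazy"-like one.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of a representation matrix X
    (samples x neurons), defined as (sum_i lam_i)^2 / sum_i lam_i^2
    over the eigenvalues lam_i of the activity covariance."""
    Xc = X - X.mean(axis=0)                    # center each neuron
    cov = Xc.T @ Xc / (X.shape[0] - 1)         # neurons x neurons covariance
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)  # guard tiny negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
# "Rich"-like activity: 100 neurons driven by only 3 latent task dimensions
low_d = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 100))
# "Lazy"-like activity: nearly isotropic, high-dimensional
high_d = rng.normal(size=(500, 100))
print(participation_ratio(low_d))   # at most ~3
print(participation_ratio(high_d))  # far larger
```

The point of the metric is exactly what the Chou et al. result exploits: it is computed from population-level recordings alone, with no access to individual weights or features.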

This talk is part of the Computational Neuroscience series.


© 2006-2025 Talks.cam, University of Cambridge. Contact Us | Help and Documentation | Privacy and Publicity