Catastrophic Forgetting and Explainable AI in Large-Scale Models for Neuroscience
- Speaker: Dr. Michail Mamalakis (University of Cambridge)
- Date & Time: Tuesday 17 February 2026, 16:00 - 17:00
- Venue: Computer Laboratory, William Gates Building, Room LT1
Abstract
This seminar examines the mechanisms of catastrophic forgetting in large-scale AI systems, with particular emphasis on applications in neuroscience. We explore how continual learning on real-world data can lead to knowledge degradation, where sequential training progressively erodes previously acquired representations. Current mitigation approaches such as replay strategies, parameter regularization methods like Elastic Weight Consolidation (EWC), gradient-based protection techniques, and context-dependent learning are discussed in the context of medical and neuroimaging foundation models. Finally, we consider practical and conceptual strategies to reduce forgetting and support stable, long-term learning in large neuroscience models.
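The abstract mentions Elastic Weight Consolidation (EWC) as a parameter-regularization defence against forgetting. As a minimal sketch (not the speaker's implementation), EWC adds a quadratic penalty that anchors parameters deemed important to a previous task, with importance estimated by a diagonal Fisher information term; the numbers below are illustrative toy values:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters while training on the new task
    theta_star -- parameters saved after the old task
    fisher     -- diagonal Fisher estimate; large values mark parameters
                  important to the old task, so moving them is costly
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example: the first parameter matters to the old task, the second barely.
theta_star = np.array([1.0, -2.0])   # parameters after task A
fisher     = np.array([10.0, 0.1])   # diagonal Fisher estimate
theta      = np.array([1.5, -1.0])   # parameters during task B

penalty = ewc_penalty(theta, theta_star, fisher, lam=1.0)
# 0.5 * (10 * 0.25 + 0.1 * 1.0) = 1.3
```

In practice this penalty is added to the new task's loss, so gradient descent trades off new-task performance against drift in the parameters the Fisher term flags as important.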
Series: This talk is part of the Foundation AI series.
Included in Lists
- All Talks (aka the CURE list)
- Artificial Intelligence Research Group Talks (Computer Laboratory)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Computer Laboratory, William Gates Building, Room LT1
- Department of Computer Science and Technology talks and seminars
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Martin's interesting talks
- ndk22's list
- ob366-ai4er
- PhD related
- rp587
- School of Technology
- Speech Seminars
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449