Large Language Models, Model Collapse, and the Conservation of Information
- Speaker: George Montanez
- Date & Time: Tuesday 17 February 2026, 15:00 - 16:00
- Venue: Computer Laboratory, William Gates Building, Room LT1
Abstract
Do Large Language Models (LLMs) think and reason? Are they perpetual information machines, producing endless coherent and correct text from finite training data? We explore how LLMs work and whether they produce rational thought and endless information. We show how theoretical considerations and experimental results from philosophy, statistics, information theory, and machine learning argue against the thesis that LLMs are rational, information-generating entities.
Series: This talk is part of the Foundation AI series.
Included in Lists
- All Talks (aka the CURE list)
- Artificial Intelligence Research Group Talks (Computer Laboratory)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Computer Laboratory, William Gates Building, Room LT1
- Department of Computer Science and Technology talks and seminars
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Martin's interesting talks
- ndk22's list
- ob366-ai4er
- PhD related
- rp587
- School of Technology
- Speech Seminars
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449
Note: Ex-directory lists are not shown.