BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Scaling Multi-Agent Reinforcement Learning to the Mean-Field Regim
 e - Batu Yardim (ETH Zürich)
DTSTART:20251112T144000Z
DTEND:20251112T152000Z
UID:TALK238504@talks.cam.ac.uk
DESCRIPTION:Reinforcement Learning (RL) has achieved remarkable succes
 s\, especially when combined with deep learning\; however\, scaling R
 L beyond the single-agent setting remains a major challenge. In parti
 cular\, the “curse of many agents” hinders the application of RL to s
 ystems with thousands or even millions of interacting participants. S
 uch large-scale problems arise naturally in domains like financial ma
 rkets\, auctions\, traffic/resource management\, and social systems\,
  where optimal decision-making and computation quickly become intract
 able. We explore mean-field reinforcement learning (MF-RL) as a princ
 ipled framework to address this challenge under the agent exchangeab
 ility assumption. Our work extends the theoretical foundations of MF
 -RL with an emphasis on computational aspects and real-world applica
 bility. Specifically\, we analyze mean-field approximation propertie
 s\, study communication and coordination bottlenecks during learning
 \, and examine the computational and statistical complexity of scali
 ng RL to the mean-field regime. Finally\, we highlight applications t
 o large-scale incentive design and resource allocation\, demonstrati
 ng how MF-RL can serve as a bridge between mean-field theory and pra
 ctical multi-agent RL algorithms.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
