BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Positional encodings in LLMs - Valeria Ruscio
DTSTART:20260604T160000Z
DTEND:20260604T164500Z
UID:TALK232558@talks.cam.ac.uk
CONTACT:Pietro Lio
DESCRIPTION:Positional encodings are essential for transformer-based langu
 age models to understand sequence order\, yet their influence extends far 
 beyond simple position tracking. This talk explores the landscape of posit
 ional encoding methods in LLMs and reveals surprising insights about how t
 hese architectural choices shape model behavior.\n\nWe begin with the fund
 amental challenge: why attention mechanisms require explicit positional in
 formation. We then survey the evolution of encoding strategies\, from sinu
 soidal approaches to modern techniques like RoPE\, examining their archite
 ctural implications and trade-offs.\n\nThe talk delves into how these diff
 erent encoding strategies fundamentally shape model architectures and repr
 esentations. We analyze the specific limitations and trade-offs of each ap
 proach\, examining how positional information propagates through transform
 er layers and influences the learned representations.\n\n"Watch it remo
 tely":https://meet.google.com/vch-pxrb-htz
LOCATION:Lecture Theatre 2\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
