BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:From Wearable Sensing to Contextual AI: An Egocentric Perspective 
 - Chi Ian Tang\, Meta
DTSTART:20260310T140000Z
DTEND:20260310T150000Z
UID:TALK235900@talks.cam.ac.uk
CONTACT:Cecilia Mascolo
DESCRIPTION:Abstract:\nThe long-held vision of wearable computing is to mo
 ve beyond simple activity tracking towards proactive\, intelligent assista
 nce. However\, achieving true contextual understanding on resource-constra
 ined devices like smart glasses remains a substantial challenge.\n\nThis t
 alk charts a path from foundational wearable sensing to the future of cont
 extual AI\, framed through an egocentric perspective. It begins by discuss
 ing the evolution from traditional mobile sensing to the rich\, multimodal
  data streams enabled by modern research platforms like Project Aria Glass
 es and large-scale egocentric datasets. The talk then examines the technic
 al building blocks required for real-world contextual understanding\, shif
 ting from basic activity recognition to complex acoustic scene analysis an
 d targeted speech enhancement that solves the "cocktail party problem" in 
 social settings.\n\nFinally\, it connects these capabilities to the broade
 r vision for a future where AI-powered eyewear can understand its wearer's
  environment\, model social context\, and effectively serve as a digital e
 xtension of human memory and perception. Throughout\, the presentation bri
 dges the gap between academic research and product deployment\, making the
  case that the convergence of egocentric sensing\, on-device AI\, and cont
 extual understanding is poised to redefine how we interact with the world 
 around us.\n\nBio:\nChi Ian Tang is a Senior Research Scientist at Meta Re
 ality Labs\, working on the foundational AI that powers smart glasses. He 
 bridges the gap between academic research and consumer products\, a journey
  that began with his PhD at the University of Cambridge Mobile Systems Res
 earch Lab\, where he developed novel self-supervised and continual learnin
 g methods for wearable sensing\, and continued at Nokia Bell Labs\, where 
 he focused on multimodal analysis for longitudinal health insights.\n\nTod
 ay at Meta Reality Labs\, he tackles real-world perceptual challenges on r
 esource-constrained devices\, contributing to core audio AI capabilities a
 nd building features like Conversation Focus. His work has been publish
 ed extensively at top-tier venues including ICML\, IMWUT\, and ICASSP\, pi
 oneering approaches in self-supervised learning for mobile sensing. As an 
 active member of the pervasive computing community\, he regularly organise
 s workshops and tutorials on advancing human sensing and serves as an Asso
 ciate Editor for the ACM IMWUT journal. His long-term research goal is to 
 close the gap between human perception and machine understanding\, enablin
 g wearable AI that can see\, hear\, and reason about the world as we do.\n
 \nMore information can be found at: https://iantang.co/
LOCATION:Computer Lab\, LT2 and Online
END:VEVENT
END:VCALENDAR
