BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:New Advances in Multimodal Reasoning - Prof. Paul Liang (MIT)
DTSTART:20250320T140000Z
DTEND:20250320T150000Z
UID:TALK227170@talks.cam.ac.uk
CONTACT:Shun Shao
DESCRIPTION:Abstract: Today's language models are increasingly capable of 
 reasoning over multiple steps with verification and backtracking to solve 
 challenging problems. However\, multimodal reasoning models that can reaso
 n over an integrated set of modalities such as text\, images\, audio\, vid
 eo\, and knowledge graphs are sorely lacking\, and such models could pave
  the way for the next frontier of AI. I will describe our group's work
  on advancing the fro
 ntiers of multimodal reasoning\, from new multimodal reasoning benchmarks 
 to training multimodal foundation models with modern reasoning approaches\
 , and applications to social understanding and education.\n\nBio: Paul Lia
 ng is an Assistant Professor at the MIT Media Lab and MIT EECS. His resear
 ch advances the foundations of multisensory artificial intelligence to enh
 ance the human experience. He is a recipient of the Siebel Scholars Award\
 , Waibel Presidential Fellowship\, Facebook PhD Fellowship\, Center for ML
  and Health Fellowship\, Rising Stars in Data Science\, and 3 best paper a
 wards. Outside of research\, he received the Alan J. Perlis Graduate Stude
 nt Teaching Award for developing new courses on multimodal machine learnin
 g.
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
