BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:LLM Generalization in Social Context - Diyi Yang\, Stanford Univer
 sity
DTSTART:20241024T150000Z
DTEND:20241024T160000Z
UID:TALK222271@talks.cam.ac.uk
CONTACT:Tiancheng Hu
DESCRIPTION:Abstract:\n\nThe successes of large language models (LLMs) hav
 e transformed many domains\, yet they do not always generalize well acro
 ss all contexts\, particularly where social factors are involved. This t
 alk examines LLM generalization in social contexts from three perspectiv
 es: assessment\, adaptation\, and application. We first present a dynami
 c evaluation protocol based on directed acyclic graphs of varying comple
 xity for assessing LLMs on many types of reasoning tasks. We then explor
 e how to adapt LLMs to be more socially generalizable by building cultur
 ally aware language technologies with an online-community-driven knowled
 ge base. Lastly\, we discuss how to customize LLMs for social skill trai
 ning in a variety of social contexts. Overall\, we hope to provide insig
 hts into how LLMs generalize in social contexts and how to develop socia
 lly intelligent LLMs.\n\nBio:\n\nDiyi Yang is an assistant professor in t
 he Computer Science Department at Stanford University. Her research focu
 ses on human-centered natural language processing and computational soci
 al science. She is a recipient of a Microsoft Research Faculty Fellowshi
 p (2021)\, an NSF CAREER Award (2022)\, an ONR Young Investigator Awar
 d (2023)\, and a Sloan Research Fellowship (2024). Her work has receive
 d multiple paper awards or nominations at top NLP and HCI conferences.
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
