BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Pretraining\, Instruction Tuning\, Alignment: Towards Building Lar
 ge Language Models from First Principles - Yao Fu\, University of Edinburg
 h
DTSTART:20230525T100000Z
DTEND:20230525T110000Z
UID:TALK201511@talks.cam.ac.uk
CONTACT:Panagiotis Fytas
DESCRIPTION:Recently\, the field of AI has been greatly impressed and insp
 ired by Large Language Models (LLMs). LLMs' multi-dimensional abilities s
 ignificantly exceed many AI researchers' and practitioners' expectations a
 nd are thus reshaping the AI research paradigm. A natural question is how L
 LMs got there\, and where these fantastic abilities come from. In this ta
 lk\, we try to dissect strong LLMs' capabilities and trace them to their s
 ources. We first review the generic recipe for building large language mo
 dels from first principles. Then we discuss recipes for improving languag
 e models' reasoning capabilities. Finally\, we consider further improveme
 nts by complexity-based prompting\, distilling chain-of-thought\, and lea
 rning from AI feedback.
LOCATION:https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBd
 XVpOXFvdz09
END:VEVENT
END:VCALENDAR
