Pretraining, Instruction Tuning, Alignment: Towards Building Large Language Models from First Principles
- 👤 Speaker: Yao Fu, University of Edinburgh
- 📅 Date & Time: Thursday 25 May 2023, 11:00 - 12:00
- 📍 Venue: https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBdXVpOXFvdz09
Abstract
Recently, the field has been greatly impressed and inspired by Large Language Models (LLMs). LLMs' multi-dimensional abilities significantly exceed many AI researchers' and practitioners' expectations and are thus reshaping the AI research paradigm. A natural question is how LLMs got here and where these remarkable abilities come from. In this talk, we try to dissect strong LLMs' capabilities and trace them to their sources. We first review a generic recipe for building large language models from first principles. We then discuss recipes for improving language models' reasoning capabilities. Finally, we consider further improvements from complexity-based prompting, distilling chain-of-thought reasoning, and learning from AI feedback.
Series: This talk is part of the Language Technology Lab Seminars series.
Included in Lists
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Guy Emerson's list
- Interested Talks
- Language Sciences for Graduate Students
- Language Technology Lab Seminars
- ndk22's list
- ob366-ai4er
- rp587
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.