Understanding LLMs via their Generative Successes and Shortcomings.
- Speaker: Swabha Swayamdipta, University of Southern California
- Date & Time: Thursday 08 February 2024, 16:00 - 17:00
- Venue: https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBdXVpOXFvdz09
Abstract
The generative capabilities of large language models have grown beyond the wildest imagination of the broader AI research community, leading many to speculate whether these successes should be attributed to the training data or to other factors concerning the model. I will present some work from my group that has revealed unique successes and shortcomings in the generative capabilities of LLMs on knowledge-oriented tasks, tasks with human and social utility, and tasks that reveal more than surface-level understanding of language. I will also discuss some aspects of language generation itself and why algorithms like truncation sampling have been so successful.
Series
This talk is part of the Language Technology Lab Seminars series.
Included in Lists
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Guy Emerson's list
- https://cam-ac-uk.zoom.us/j/97599459216?pwd=QTRsOWZCOXRTREVnbTJBdXVpOXFvdz09
- Interested Talks
- Language Sciences for Graduate Students
- Language Technology Lab Seminars
- ndk22's list
- ob366-ai4er
- rp587
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.