Can Sparsity Lead to Efficient LLMs?
- 🎤 Speaker: Shiwei Liu, University of Oxford
- 📅 Date & Time: Thursday 13 June 2024, 11:00 - 12:00
- 📍 Venue: Faculty of English, Room SR24
Abstract
The rapid advancement of Large Language Models (LLMs) has revolutionized various natural language processing tasks. However, the substantial size of LLMs presents significant challenges in training, fine-tuning, and deployment. In this talk, I will discuss how sparsity, a fundamental property of neural networks, can be leveraged to enhance LLM efficiency. The presentation will cover recent advances in LLM pruning and parameter-efficient fine-tuning, centered on the principle: Not Every Layer in LLMs is Worth Equal Computing.
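As a rough illustration of the "not every layer is worth equal computing" idea, the sketch below shows non-uniform layer-wise magnitude pruning, where layers judged less important receive higher sparsity. The importance scores, sparsity schedule, and helper functions are hypothetical assumptions for illustration, not the specific method presented in the talk.

```python
# Minimal, hypothetical sketch of non-uniform layer-wise magnitude pruning.
# The per-layer importance scores and the sparsity allocation rule below
# are illustrative assumptions, not the talk's actual algorithm.
import torch
import torch.nn as nn


def prune_layer(linear: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of a linear layer in place."""
    w = linear.weight.data
    k = int(sparsity * w.numel())
    if k == 0:
        return
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = w.abs().flatten().kthvalue(k).values
    w[w.abs() <= threshold] = 0.0


def allocate_sparsity(importance: list[float], avg_sparsity: float) -> list[float]:
    """Give less important layers more sparsity, keeping the mean ratio fixed."""
    inv = torch.tensor([1.0 / s for s in importance])
    ratios = inv / inv.mean() * avg_sparsity
    return ratios.clamp(0.0, 0.95).tolist()


# Toy "model": three linear layers with made-up importance scores.
model = nn.Sequential(nn.Linear(64, 64), nn.Linear(64, 64), nn.Linear(64, 64))
importance = [1.0, 3.0, 2.0]  # hypothetical per-layer importance
ratios = allocate_sparsity(importance, avg_sparsity=0.5)

for layer, s in zip(model, ratios):
    prune_layer(layer, s)
    density = (layer.weight != 0).float().mean().item()
    print(f"sparsity target {s:.2f} -> remaining density {density:.2f}")
```

The design point is simply that the sparsity budget is allocated unevenly across layers rather than pruned uniformly; any real method would replace the inverse-importance heuristic with a principled layer score.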
Series
This talk is part of the Language Technology Lab Seminars series.
Included in Lists
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Faculty of English, Room SR24
- Guy Emerson's list
- Interested Talks
- Language Sciences for Graduate Students
- Language Technology Lab Seminars
- ndk22's list
- ob366-ai4er
- rp587
- Simon Baker's List
- Trust & Technology Initiative - interesting events
- yk449
Note: Ex-directory lists are not shown.