
Can Sparsity Lead to Efficient LLMs?

If you have a question about this talk, please contact Panagiotis Fytas.

The rapid advancement of Large Language Models (LLMs) has revolutionized a wide range of natural language processing tasks. However, the substantial size of LLMs poses significant challenges for training, fine-tuning, and deployment. In this talk, I will discuss how sparsity, a fundamental characteristic of neural networks, can be leveraged to make LLMs more efficient. The presentation will cover recent advances in LLM pruning and parameter-efficient fine-tuning, centered on the principle: Not Every Layer in LLMs is Worth Equal Computing.
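As a concrete illustration of that principle, the sketch below (not taken from the talk) applies magnitude pruning with non-uniform, per-layer sparsity ratios to a toy PyTorch model. The layer indices and sparsity ratios are hypothetical placeholders; a real LLM would expose transformer blocks rather than two linear layers.

```python
# A minimal sketch of non-uniform magnitude pruning, assuming layers
# differ in how much sparsity they tolerate. Not the speaker's method.
import torch
import torch.nn as nn

def prune_layer(linear: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights in one linear layer."""
    weight = linear.weight.data
    k = int(weight.numel() * sparsity)  # number of weights to zero
    if k == 0:
        return
    # The k-th smallest absolute value serves as the pruning threshold.
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    weight.mul_(mask)

# Toy stand-in for a model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Hypothetical per-layer ratios: prune the later layer more aggressively,
# reflecting the premise that not every layer is worth equal computing.
ratios = {0: 0.3, 2: 0.7}
for idx, sparsity in ratios.items():
    prune_layer(model[idx], sparsity)

for idx in ratios:
    zeros = (model[idx].weight == 0).float().mean().item()
    print(f"layer {idx}: {100 * zeros:.1f}% of weights pruned")
```

In practice, per-layer ratios would be chosen from a measured importance signal (for example, each layer's effect on validation loss) rather than fixed by hand as above.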

This talk is part of the Language Technology Lab Seminars series.
