POSTPONED until next week! Work in progress: Making efficient use of language models for theorem proving
- Speaker: Albert Qiaochu Jiang (University of Cambridge)
- Date & Time: Tuesday 22 February 2022, 13:15-14:15
- Venue: Zoom
Abstract
Language models are neural networks trained on broad data and adaptable to a wide range of downstream tasks. They have shown impressive capabilities in domains such as NLP, vision, robotics, and reasoning. In mathematical reasoning, language models have achieved several state-of-the-art results, but their huge computational cost prevents wider adoption in the formal mathematics community. We introduce two methods to make efficient use of language models for theorem proving: one augments the language model prompt with previous proof steps, and the other samples output candidates of greater quantity and diversity. Our model improves the state-of-the-art theorem-proving success rate on the Archive of Formal Proofs from 33.2% to 39.6%, without incurring any additional computational cost.
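The two ideas in the abstract can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the function names (`build_prompt`, `sample_candidates`) and the placeholder `model` callable are assumptions made here for clarity, and the multi-temperature loop is one common way to increase candidate quantity and diversity.

```python
def build_prompt(goal, previous_steps, max_steps=3):
    """Idea 1 (sketch): augment the prompt with recent proof steps.

    Only the last `max_steps` steps are kept, so the prompt stays short
    and the extra context comes at little additional cost.
    """
    context = list(previous_steps)[-max_steps:]
    return "\n".join(context + [goal])


def sample_candidates(model, prompt, temperatures=(0.4, 0.8, 1.2), n_per_temp=2):
    """Idea 2 (sketch): draw many candidates across several temperatures.

    `model` is a placeholder callable `(prompt, temperature) -> str`;
    sampling at multiple temperatures trades a little quality per sample
    for greater diversity across the candidate pool.
    """
    candidates = []
    for t in temperatures:
        for _ in range(n_per_temp):
            candidates.append(model(prompt, t))
    # De-duplicate while preserving sampling order.
    seen, unique = set(), []
    for c in candidates:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    return unique
```

With a real proof assistant in the loop, each surviving candidate would be checked against the prover, and only verified steps appended to `previous_steps` for the next round.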
Series This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
Included in Lists
- All Talks (aka the CURE list)
- Artificial Intelligence Research Group Talks (Computer Laboratory)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Department of Computer Science and Technology talks and seminars
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Martin's interesting talks
- ndk22's list
- ob366-ai4er
- PhD related
- rp587
- School of Technology
- Speech Seminars
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449
- Zoom
Note: Ex-directory lists are not shown.