Towards Meaningful Stochastic Defences in Machine Learning
- Speaker: Ilia Shumailov, University of Oxford
- Date & Time: Tuesday 15 November 2022, 14:00 - 15:00
- Venue: Webinar & FW11, Computer Laboratory, William Gates Building
Abstract
Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can cause ML systems to break at the training, inference, and deployment stages. In this talk, I will cover recent work on attacking and defending machine learning pipelines using stochastic defences. I will describe how seemingly powerful defences fail to provide any security and end up being vulnerable to even standard attackers. I will then demonstrate a number of possible randomness-based defences that can provide theoretical and practical performance improvements.
Bio: Ilia Shumailov holds a PhD in Computer Science from the University of Cambridge, specialising in machine learning and computer security. During his PhD, under the supervision of Prof Ross Anderson, Ilia worked on a number of projects spanning the fields of machine learning security, cybercrime analysis, and signal processing. Following his PhD, Ilia joined the Vector Institute in Canada as a Postdoctoral Fellow, where he worked under the supervision of Prof Nicolas Papernot and Prof Kassem Fawaz. Ilia is currently a Junior Research Fellow at Christ Church, University of Oxford.
Series: This talk is part of the Computer Laboratory Security Seminar series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge talks
- Computer Laboratory Security Seminar
- Department of Computer Science and Technology talks and seminars
- Interested Talks
- School of Technology
- Security-related talks
- Trust & Technology Initiative - interesting events
- Webinar & FW11, Computer Laboratory, William Gates Building.
- yk449