Realistic Adversarial Machine Learning
- Speaker: Nicholas Carlini, Google Brain
- Date & Time: Monday 11 November 2019, 13:00 - 14:00
- Venue: LT2, Computer Laboratory, William Gates Building
Abstract
While the vulnerability of machine learning is extensively studied, most work considers security or privacy in academic settings. This talk covers three aspects of recent work on realistic adversarial machine learning, focusing on the “black box” threat model, where the adversary has only query access to a remote classifier and not the complete model itself.
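As a minimal sketch of this threat model (the names here, such as `BlackBoxClassifier` and the stand-in victim model, are illustrative assumptions and not from the talk), the adversary only ever sees the output of a prediction query, never the weights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class BlackBoxClassifier:
    """Wraps a trained model so an attacker sees only label queries."""
    def __init__(self, model):
        self._model = model      # parameters are hidden from the adversary
        self.query_count = 0     # deployed services often meter queries

    def predict(self, x):
        """Return the predicted label for a single input vector."""
        self.query_count += 1
        return self._model.predict(x.reshape(1, -1))[0]

# Stand-in "remote" model trained on synthetic data (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
oracle = BlackBoxClassifier(LogisticRegression().fit(X, y))

print(oracle.predict(np.array([1.0, -0.2])), oracle.query_count)
```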
I first study whether this black-box threat model can provide apparent robustness to adversarial examples (i.e., test-time evasion attacks). Second, I turn to the question of privacy and examine to what extent adversaries can leak sensitive data out of classifiers trained on private data. Finally, I ask to what extent the black-box threat model can be relied upon, and study “model extraction”: attacks that allow an adversary to recover an approximation of the model's parameters using only queries.
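As a hedged illustration of that last point, the following toy sketch of model extraction (an illustrative setup, not Carlini's actual method) labels random queries with a hidden victim model, fits a surrogate to those labels, and measures how often the two agree:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hidden "victim" model; the adversary may only query it for labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (0.8 * X[:, 0] - 0.3 * X[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X, y)

# Extraction: label random query points via the victim, fit a surrogate.
queries = rng.normal(size=(1000, 2))
labels = victim.predict(queries)           # only query access is used
surrogate = LogisticRegression().fit(queries, labels)

# Agreement on fresh inputs is one common measure of extraction fidelity.
test = rng.normal(size=(2000, 2))
fidelity = np.mean(surrogate.predict(test) == victim.predict(test))
print(f"surrogate agrees with victim on {fidelity:.1%} of fresh inputs")
```

For a simple linear victim like this, a few thousand labeled queries are typically enough for the surrogate to agree with the victim almost everywhere; real extraction attacks target far larger models under query budgets.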
Series
This talk is part of the Computer Laboratory Security Seminar series.
Included in Lists
- All Talks (aka the CURE list)
- bld31
- Cambridge talks
- Computer Laboratory Security Seminar
- Department of Computer Science and Technology talks and seminars
- Interested Talks
- LT2, Computer Laboratory, William Gates Building
- School of Technology
- Security-related talks
- Trust & Technology Initiative - interesting events
- yk449