BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Machine Learning needs Better Randomness Standards: Randomised Smo
 othing and PRNG-based attacks - Pranav Dahiya\, University of Cambridge
DTSTART:20231124T160000Z
DTEND:20231124T170000Z
UID:TALK208927@talks.cam.ac.uk
CONTACT:Hridoy Sankar Dutta
DESCRIPTION:Randomness supports many critical functions in the field of ma
 chine learning (ML) including optimisation\, data selection\, privacy\, an
 d security. ML systems outsource the task of generating or harvesting rand
 omness to the compiler\, the cloud service provider or elsewhere in the to
 olchain. Yet there is a long history of attackers exploiting poor randomne
 ss\, or even creating it – as when the NSA put backdoors in random numbe
 r generators to break cryptography. In this paper we consider whether atta
 ckers can compromise an ML system using only the randomness on which they 
 commonly rely. We focus our effort on Randomised Smoothing\, a popular app
 roach to train certifiably robust models\, and to certify specific input d
 atapoints of an arbitrary model. We choose Randomised Smoothing since it i
 s used for both security and safety – to counteract adversarial examples
  and quantify uncertainty respectively. Under the hood\, it relies on samp
 ling Gaussian noise to explore the volume around a data point to certify t
 hat a model is not vulnerable to adversarial examples. We demonstrate an e
 ntirely novel attack\, where an attacker backdoors the supplied randomness
  to falsely certify either an overestimate or an underestimate of robustne
 ss by up to 81 times. We demonstrate that such attacks are possible\, tha
 t they require very small changes to randomness to succeed\, and that they
  are hard to detect. As an example\, we hide an attack in the random numbe
 r generator and show that the randomness tests suggested by NIST fail to d
 etect it. We advocate updating the NIST guidelines on random number testin
 g to make them more appropriate for safety-critical and security-critical 
 machine-learning applications.\n\nRECORDING : Please note\, this event wil
 l be recorded and will be available after the event for an indeterminate p
 eriod under a CC BY-NC-ND license. Audience members should bear this in m
 ind before joining the webinar or asking questions.
LOCATION:Webinar & FW11\, Computer Laboratory\, William Gates Building.
END:VEVENT
END:VCALENDAR
