BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Realistic Adversarial Machine Learning - Nicholas Carlini\, Google
  Brain
DTSTART:20191111T130000Z
DTEND:20191111T140000Z
UID:TALK134758@talks.cam.ac.uk
CONTACT:Jack Hughes
DESCRIPTION:While the vulnerability of machine learning is extensively
  studied\,\nmost work considers security or privacy in academic settings.
 \nThis talk studies three aspects of recent work on\nrealistic
  adversarial machine learning\, focusing on the "black\nbox" threat model
  where the adv
 ersary has only query access\nto a remote classifier\, but not the complet
 e model itself.\n\nI first study if this black-box threat model can provid
 e apparent\nrobustness to adversarial examples (i.e.\, test time evasion\n
 attacks). Second\, I turn to the question of privacy and examine\nto what 
 extent adversaries can leak sensitive data out of\nclassifiers trained on 
 private data. Finally\, I ask to what extent\nthe black-box threat model c
 an be relied upon\, and study\n"model extraction": attacks that allow an a
 dversary to recover\nthe approximate parameters using only queries.
LOCATION:LT2\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
