BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:On the Effectiveness of Generating Adversarial Examples for Evadin
 g Blackbox Malware Classifiers - Dr Sadia Afroz\, ICSI\, UC Berkeley\, Ava
 st
DTSTART:20200217T130000Z
DTEND:20200217T140000Z
UID:TALK139780@talks.cam.ac.uk
CONTACT:Jack Hughes
DESCRIPTION:Recent advances in adversarial attacks have shown that machin
 e learning classifiers based on static analysis are vulnerable to adve
 rsarial attacks. However\, real-world antivirus systems do not rely onl
 y on static classifiers\, so many of these static evasions are detecte
 d by dynamic analysis whenever the malware runs. The real question is: t
 o what extent are these adversarial attacks actually harmful to real us
 ers? In this paper\, we propose a systematic framework to create and ev
 aluate realistic adversarial malware that evades real-world systems. We p
 ropose new adversarial attacks against real-world antivirus systems bas
 ed on code randomization and binary manipulation\, and use our framewor
 k to perform the attacks on 1000 malware samples\, testing 4 commercial a
 ntivirus products and 1 open-source classifier. We demonstrate that the s
 tatic detectors of real-world antivirus can be evaded by changing onl
 y 1 byte in some malware samples\, and that many of the adversarial att
 acks are transferable between different antivirus products. We also tes
 ted the efficacy of the complete (i.e. static + dynamic) classifiers i
 n protecting users. While most commercial antivirus products use thei
 r dynamic engines to protect users’ devices when the static classifier
 s are evaded\, we are the first to demonstrate that for one commercia
 l antivirus product\, static evasions can also evade the offline dynam
 ic detectors and infect users’ machines. We discover a new attack surf
 ace for adversarial examples that can cause harm to real users.\n\nBio:
 \n\nSadia Afroz is a research scientist at the International Computer S
 cience Institute (ICSI) and Avast Software. Her work focuses on anti-ce
 nsorship\, anonymity\, and adversarial learning. Her work on adversaria
 l authorship attribution received the 2013 Privacy Enhancing Technologi
 es (PET) Award\, the best student paper award at the 2012 Privacy Enhan
 cing Technologies Symposium (PETS)\, and the 2014 ACM SIGSAC dissertati
 on award (runner-up). More about her research can be found at: http://w
 ww1.icsi.berkeley.edu/~sadia/
LOCATION:LT2\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
