BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Blind Backdoors in Deep Learning - Eugene Bagdasaryan\, Cornell Te
 ch
DTSTART:20211109T140000Z
DTEND:20211109T150000Z
UID:TALK162874@talks.cam.ac.uk
CONTACT:Jack Hughes
DESCRIPTION:We investigate a new method for injecting backdoors into machi
 ne learning models\, based on compromising the loss-value computation in t
 he model-training code. We use it to demonstrate new classes of backdoors 
 strictly more powerful than those in the prior literature: single-pixel an
 d physical backdoors in ImageNet models\, backdoors that switch the model 
 to a covert\, privacy-violating task\, and backdoors that do not require i
 nference-time input modifications.\n\nOur attack is blind: the attacker c
 annot modify the training data\, nor observe the execution of his code\, n
 or access the resulting model. The attack code creates poisoned training i
 nputs "on the fly\," as the model is training\, and uses multi-objective o
 ptimization to achieve high accuracy on both the main and backdoor tasks. 
 We show how a blind attack can evade any known defense and propose new one
 s.\n\nRECORDING: Please note\, this event will be recorded and will be av
 ailable after the event for an indeterminate period under a CC BY-NC-ND l
 icense. Audience members should bear this in mind before joining the webin
 ar or asking questions.
LOCATION:Webinar
END:VEVENT
END:VCALENDAR
