BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Provably Safe Certification for Machine Learning Models under Adve
 rsarial Attacks - Prof. Miguel Rodrigues\, UCL
DTSTART:20231122T140000Z
DTEND:20231122T150000Z
UID:TALK205279@talks.cam.ac.uk
CONTACT:Prof. Ramji Venkataramanan
DESCRIPTION:It is widely known that state-of-the-art machine learning mode
 ls\, including vision and language models\, can be seriously compromised
  by adversarial perturbations\, so it is increasingly important to be ab
 le to certify their performance in the presence of the most effective ad
 versarial attacks. \n\n\nThis talk will introduce an approach inspired b
 y distribution-free risk-controlling procedures to certify the performan
 ce of machine learning models under adversarial attacks\, with populatio
 n-level risk guarantees. In particular\, given a specific attack\, we wi
 ll introduce the notion of an (alpha\, zeta)-safety guarantee for a mach
 ine learning model: this guarantee\, which is supported by a testing pro
 cedure based on the availability of a calibration set\, entails that one
  declares the model's adversarial (population) risk to be less than alph
 a (i.e. the model is safe) when that risk is in fact higher than alpha (
 i.e. the model is unsafe) with probability less than zeta. We will also
  introduce Bayesian-optimization-based approaches\, along with their sta
 tistical guarantees\, to determine very efficiently whether or not a mac
 hine learning model is (alpha\, zeta)-safe in the presence of an adversa
 rial attack. \n\n\nThis talk will also illustrate how to apply our frame
 work to a range of machine learning models\, including vision Transforme
 r (ViT) and ResNet models of various sizes\, impaired by a variety of ad
 versarial attacks.\n\n
LOCATION:MR5\, CMS Pavilion A
END:VEVENT
END:VCALENDAR
