BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Interpretability in Machine Learning - Adrian Weller\; Tameem Adel
  Hesham
DTSTART:20171109T133000Z
DTEND:20171109T150000Z
UID:TALK94216@talks.cam.ac.uk
CONTACT:Alessandro Davide Ialongo
DESCRIPTION:*Abstract:*\n\nInterpretability is often considered crucial
 for enabling effective real-world deployment of intelligent systems.
 Unlike performance measures such as accuracy\, objective measurement
 criteria for interpretability are difficult to identify. The volume
 of research on interpretability is growing rapidly (more than 20\,000
 publications related to interpretability in ML in the last five years
 can be found through Google Scholar). However\, there is still little
 consensus on what interpretability is\, how to measure and evaluate
 it\, and how to control it. There is an urgent need for most of these
 issues to be rigorously defined and addressed. Recent European Union
 regulation (the GDPR) will\, by 2018\, require algorithms that make
 decisions based on user-level predictors which significantly affect
 users to provide an explanation ("right to explanation"). One taxonomy
 of interpretability in ML distinguishes between global and local
 interpretability algorithms. The former aims at a general understanding
 of how the system works as a whole and of what patterns are present in
 the data. Local interpretability\, on the other hand\, provides an
 explanation of a particular prediction or decision.\n\nWe take a look
 here at two algorithms\, each belonging to one of the aforementioned
 categories. The prediction difference analysis method visualizes the
 response of a deep neural network to a specific input: when classifying
 images\, it highlights areas in the input image that provide evidence
 for or against a certain class. We also examine an algorithm that
 facilitates human understanding of\, and reasoning about\, a dataset by
 learning prototypes and criticisms. The method is referred to as
 MMD-critic\, and it is motivated by the Bayesian model criticism
 framework.\n\n*Recommended reading:*\n\n* "Towards A Rigorous Science
 of Interpretable Machine Learning"\, Finale Doshi-Velez\, Been Kim\,
 arXiv 2017.\n\n* "Visualizing Deep Neural Network Decisions: Prediction
 Difference Analysis"\, Luisa Zintgraf\, Taco Cohen\, Tameem Adel\, Max
 Welling\, ICLR 2017.\n\n* "Examples are not Enough\, Learn to Criticize!
 Criticism for Interpretability"\, Been Kim\, Rajiv Khanna\, Oluwasanmi
 Koyejo\, NIPS 2016.
LOCATION:Engineering Department\, CBL Seminar Room 4-38
END:VEVENT
END:VCALENDAR
