BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Interpretability in Machine Learning: What it means\, How we're ge
 tting there - Finale Doshi-Velez\, Harvard University
DTSTART:20180717T120000Z
DTEND:20180717T130000Z
UID:TALK108268@talks.cam.ac.uk
CONTACT:Microsoft Research Cambridge Talks Admins
DESCRIPTION:As machine learning systems become ubiquitous\, there is a gro
 wing interest in interpretable machine learning -- that is\, systems that 
 can provide human-interpretable rationale for their predictions and decisi
 ons. In this talk\, I'll first give two examples of real healthcare settin
 gs -- mortality modeling in the ICU and treatment response in major depres
 sion -- where the ability to interpret learned models is essential\, and d
 escribe how we built models to meet those needs. Next\, I'll speak about s
 ome of the work we are doing to understand interpretability more broadly: 
 what exactly makes a model interpretable? And can we optimize for it? By f
 ormalizing these notions\, we can hope to identify universals of interpret
 ability and also rigorously compare different kinds of systems for produci
 ng algorithmic explanations.\nIncludes joint work with Been Kim\, Andrew R
 oss\, Mike Wu\, Michael Hughes\, Menaka Narayanan\, Sam Gershman\, Emily C
 hen\, Jeffrey He\, Isaac Lage\, Roy Perlis\, Tom McCoy\, Gabe Hope\, Leah 
 Weiner\, Erik Sudderth\, Sonali Parbhoo\, Marzyeh Ghassemi\, Pete Szolovit
 s\, Mornin Feng\, Leo Celi\, Nicole Brimmer\, Tristan Naumann\, Rohit Josh
 i\, Anna Rumshisky\, and the Berkman Klein Center. \n
LOCATION:Auditorium\, Microsoft Research Ltd\, 21 Station Road\, Cambridge
 \, CB1 2FB
END:VEVENT
END:VCALENDAR
