BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Relevance Forcing: More Interpretable Neural Networks through Prio
 r Knowledge - Christian Etmann (University of Bremen)
DTSTART:20180509T143000Z
DTEND:20180509T153000Z
UID:TALK105463@talks.cam.ac.uk
CONTACT:Rachel Furner
DESCRIPTION:Neural networks are able to reach high accuracies across many 
 different classification tasks. However\, these 'black-box models' suffer 
 from one drawback: it is generally difficult to assess how the network rea
 ched its classification decision. Nevertheless\, through different relevan
 ce measures\, it is possible to determine which parts of the given input c
 ontribute to the resulting output. By imposing certain penalties on this r
 elevance\, through which we can encode prior information about the problem
  domain\, we can train models which take this information into account. If
 we view these relevance measures as discretized dynamical systems\, we may
 gain some insight into the reliability of their explanations.
LOCATION:MR3 Centre for Mathematical Sciences
END:VEVENT
END:VCALENDAR
