Relevance Forcing: More Interpretable Neural Networks through Prior Knowledge
- Speaker: Christian Etmann (University of Bremen)
- Date & Time: Wednesday 09 May 2018, 15:30 - 16:30
- Venue: MR3, Centre for Mathematical Sciences
Abstract
Neural networks reach high accuracies across many different classification tasks. However, these 'black-box models' suffer from one drawback: it is generally difficult to assess how the network reached its classification decision. Nevertheless, through different relevance measures, it is possible to determine which parts of the given input contribute to the resulting output. By imposing certain penalties on this relevance, through which we can encode prior information about the problem domain, we can train models which take this information into account. If we view these relevance measures as discretized dynamical systems, we may gain some insight into the reliability of their explanations.
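The talk itself does not include code, but the idea of penalising relevance can be illustrated with a minimal sketch. Here relevance is taken to be gradient × input (one common relevance measure; the talk may use a different one), for a linear score where the gradient is simply the weight vector. A hypothetical binary mask encodes the prior knowledge that certain input features should be irrelevant, and the penalty is the squared relevance on those features — a term that could be added to a training loss.

```python
import numpy as np

def gradient_times_input_relevance(x, w):
    """Gradient x input relevance for a linear score f(x) = w . x.
    For this model, the gradient of f with respect to x is simply w."""
    return x * w  # elementwise relevance per input feature

def relevance_penalty(x, w, mask):
    """Penalty discouraging relevance on features that prior knowledge
    marks as irrelevant (mask == 1). Added to the usual training loss,
    this 'forces' the model's explanation to respect the prior."""
    r = gradient_times_input_relevance(x, w)
    return np.sum((mask * r) ** 2)

# Toy example: 4 features, the last two known to be irrelevant a priori.
x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, -1.0, 0.0, 0.2])
mask = np.array([0.0, 0.0, 1.0, 1.0])

relevance = gradient_times_input_relevance(x, w)  # [0.5, -2.0, 0.0, 0.8]
penalty = relevance_penalty(x, w, mask)           # 0.0**2 + 0.8**2 = 0.64
```

For deep networks the gradient would be computed by backpropagation rather than read off analytically, but the structure of the penalty term is the same.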
Series
This talk is part of the CCIMI Seminars series.
Included in Lists
- All CMS events
- All Talks (aka the CURE list)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge talks
- CCIMI
- CCIMI Seminars
- Chris Davis' list
- CMS Events
- custom
- DPMMS info aggregator
- DPMMS lists
- DPMMS Lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- MR3 Centre for Mathematical Sciences
- ndk22's list
- ob366-ai4er
- rp587
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Trust & Technology Initiative - interesting events
- yk449