Explanations for Attributing Deep Neural Network Predictions

Bibliographic Details
Published in: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Vol. 11700, pp. 149-167
Main Authors: Fong, Ruth; Vedaldi, Andrea
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2019
Series: Lecture Notes in Computer Science

Summary: Given the recent success of deep neural networks and their application to high-impact, high-risk domains such as autonomous driving and healthcare decision-making, there is a great need for faithful and interpretable explanations of “why” an algorithm makes a certain prediction. In this chapter, we introduce (1) Meta-Predictors as Explanations, a principled framework for learning explanations for any black-box algorithm, and (2) Meaningful Perturbations, an instantiation of our paradigm applied to the problem of attribution, which is concerned with attributing which features of an input (i.e., regions of an input image) are responsible for a model’s output (i.e., a CNN classifier’s object class prediction). We first introduced these contributions in [8]. We also briefly survey existing visual attribution methods and highlight how they fail to be both faithful and interpretable.
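
To make the attribution idea in the summary concrete, the following is a minimal PyTorch sketch of a Meaningful Perturbations-style objective, not the authors' reference implementation: it learns a coarse mask that blurs out image regions so as to minimize the classifier's score for a target class, with an L1 penalty keeping the deleted region small, and it omits the paper's further regularizers (e.g., total variation and jitter). The names model, image (a preprocessed 1x3xHxW tensor), target_class, and all hyperparameter values are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def explain_by_perturbation(model, image, target_class,
                                steps=300, lr=0.1, l1_weight=0.05):
        # A heavily blurred copy of the image serves as the "deleted"
        # reference that masked-out regions fade into.
        blurred = F.avg_pool2d(image, kernel_size=11, stride=1, padding=5)

        # Coarse mask in [0, 1] (1 = keep pixel, 0 = replace with blur);
        # keeping it low-resolution and upsampling it discourages
        # adversarial high-frequency artifacts.
        mask = torch.full((1, 1, 28, 28), 0.99, requires_grad=True)
        optimizer = torch.optim.Adam([mask], lr=lr)

        for _ in range(steps):
            optimizer.zero_grad()
            m = F.interpolate(mask, size=image.shape[-2:],
                              mode="bilinear", align_corners=False)
            perturbed = m * image + (1 - m) * blurred
            # model is assumed to return raw class logits.
            score = F.softmax(model(perturbed), dim=1)[0, target_class]
            # Drive the target-class probability down while penalizing
            # the area of the deleted region (L1 on 1 - mask).
            loss = score + l1_weight * (1 - m).abs().mean()
            loss.backward()
            optimizer.step()
            mask.data.clamp_(0, 1)  # keep mask values in [0, 1]

        # Regions that had to be deleted (mask near 0) are the evidence
        # the classifier relied on, so 1 - mask is the saliency map.
        return 1 - mask.detach()

A call such as explain_by_perturbation(model, image, target_class=243) (the class index is hypothetical) would return a coarse map whose large values mark the image regions most responsible for that prediction.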
ISBN: 3030289532, 9783030289539
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-030-28954-6_8