Gradient-Based Attribution Methods

Bibliographic Details
Published in: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Vol. 11700, pp. 169-191
Main Authors: Ancona, Marco; Ceolini, Enea; Öztireli, Cengiz; Gross, Markus
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2019
Series: Lecture Notes in Computer Science

Summary: The problem of explaining complex machine learning models, including Deep Neural Networks, has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, the very definition of explanation is still debated. Moreover, only a few attempts have been made to compare explanation methods from a theoretical perspective. In this chapter, we discuss the theoretical properties of several attribution methods and show how they share the same idea of using the gradient information as a descriptive factor for the functioning of a model. Finally, we discuss the strengths and limitations of these methods and compare them with available alternatives.
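The core idea the summary refers to — treating the gradient of the model output with respect to the input as an attribution signal — can be sketched on a toy function. This is a minimal illustration, not code from the chapter: the toy model, its weights, and the finite-difference gradient helper are all assumptions chosen for demonstration, implementing the simple Gradient * Input attribution rule.

```python
import numpy as np

# Assumed toy "network": a weighted sum squashed by tanh (not from the chapter).
W = np.array([0.5, -1.0, 2.0])

def model(x):
    return np.tanh(W @ x)

def gradient_x_input(f, x, eps=1e-6):
    # Approximate df/dx_i by central finite differences, then multiply
    # elementwise by the input: the Gradient * Input attribution rule.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad * x

x = np.array([1.0, 2.0, 3.0])
attr = gradient_x_input(model, x)  # one attribution score per input feature
```

In a real network the gradient would come from backpropagation rather than finite differences; the point here is only that the attribution vector is built from local gradient information scaled by the input.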
ISBN: 3030289532, 9783030289539
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-030-28954-6_9