From attribution maps to human-understandable explanations through Concept Relevance Propagation

Bibliographic Details
Published in: Nature Machine Intelligence, Vol. 5, No. 9, pp. 1006-1019
Main Authors: Achtibat, Reduan; Dreyer, Maximilian; Eisenbraun, Ilona; Bosse, Sebastian; Wiegand, Thomas; Samek, Wojciech; Lapuschkin, Sebastian
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 01.09.2023
Summary: The field of explainable artificial intelligence (XAI) aims to bring transparency to today’s powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying ‘where’ important features occur (but not providing information about ‘what’ they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both types of method thus provide only partial insights and leave the burden of interpreting the model’s reasoning to the user. Here we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the ‘where’ and ‘what’ questions for individual predictions. We demonstrate the capability of our method in various settings, showcasing that CRP leads to more human-interpretable explanations and provides deep insights into the model’s representation and reasoning through concept atlases, concept-composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision-making.

Local methods of explainable artificial intelligence identify where important features or inputs occur, while global methods try to understand what features or concepts have been learned by a model. The authors propose a concept-level explanation method that bridges the local and global perspectives, enabling more comprehensive and human-understandable explanations.
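The central idea summarized above, conditioning a backward attribution pass on a single learned concept so that the resulting heatmap shows ‘where’ that concept contributes to a specific prediction, can be illustrated with a minimal sketch. The snippet below is not the authors’ CRP implementation (which builds on layer-wise relevance propagation rules); it uses a masked gradient×input pass in PyTorch as a simple stand-in, and the layer index and channel number are arbitrary placeholders.

```python
import torch
import torchvision.models as models

# Any convolutional classifier works for this illustration; random weights
# keep the example self-contained (in practice you would load a trained model).
model = models.vgg16(weights=None).eval()

# Hypothetical choices: an intermediate layer and one channel treated as a "concept".
target_layer = model.features[28]   # last conv layer of VGG-16
concept_channel = 42                # placeholder channel index

activations = {}
def keep_activations(module, inputs, output):
    activations["act"] = output     # store the layer output for masking later

hook = target_layer.register_forward_hook(keep_activations)

x = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in input image
logits = model(x)
class_idx = logits.argmax(dim=1).item()
score = logits[0, class_idx]

# 'What': gradient of the class score w.r.t. the intermediate activations,
# restricted to the chosen concept channel.
grad_act = torch.autograd.grad(score, activations["act"], retain_graph=True)[0]
mask = torch.zeros_like(grad_act)
mask[:, concept_channel] = 1.0

# 'Where': propagate only the masked signal back to the input, giving a
# concept-conditional heatmap (gradient x input as a crude attribution proxy).
grad_x = torch.autograd.grad(
    activations["act"], x, grad_outputs=grad_act * mask
)[0]
concept_heatmap = (grad_x * x).sum(dim=1)   # aggregate over colour channels

hook.remove()
print(concept_heatmap.shape)                # torch.Size([1, 224, 224])
```

In CRP proper, the masked backward pass uses relevance propagation rules rather than raw gradients, and the per-concept relevances are aggregated across samples into the concept atlases and concept-composition analyses described in the abstract.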
ISSN: 2522-5839
DOI: 10.1038/s42256-023-00711-8