Causal Explanation of Convolutional Neural Networks

Bibliographic Details
Published in: Machine Learning and Knowledge Discovery in Databases. Research Track, Vol. 12976, pp. 633-649
Main Author: Debbi, Hichem
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2021
Series: Lecture Notes in Computer Science

Summary: In this paper we introduce an explanation technique for Convolutional Neural Networks (CNNs) based on the theory of causality by Halpern and Pearl [12]. The causal explanation technique (CexCNN) is based on measuring each filter's importance to a CNN decision through counterfactual reasoning. In addition, we employ the extended causal definitions of responsibility and blame to weight the importance of such filters and project their contribution onto input images. Since CNNs form a hierarchical structure, and since causal models can be hierarchically abstracted, we exploit this similarity for the main contribution of this paper: localizing the features in the input image that contributed most to the CNN's decision. Beyond localization, we show that CexCNN is also useful for model compression through pruning of the less important filters. We tested CexCNN on several CNN architectures and datasets. (The code is available at https://github.com/HichemDebbi/CexCNN)
ISBN: 3030865193; 9783030865191
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-86520-7_39
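
The summary above describes measuring filter importance through counterfactual reasoning. The sketch below is only a rough illustration of that general idea, not the paper's implementation (which additionally uses responsibility and blame weighting, see the linked repository): zero out one convolutional filter's activations at a time and record how much the predicted class score drops. The model (VGG16), the choice of layer, and the input file name are assumptions made for this example.

# Illustrative sketch only -- NOT the paper's CexCNN implementation.
# Counterfactual filter importance: ablate each filter and measure the
# drop in the score of the originally predicted class.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
conv = model.features[28]   # last conv layer of VGG16 (assumed choice)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(x)
pred = logits.argmax(dim=1).item()
base_score = logits[0, pred].item()

def ablate(filter_idx):
    # Forward hook that zeroes one filter's feature map (the counterfactual).
    def hook(module, inputs, output):
        output = output.clone()
        output[:, filter_idx] = 0.0
        return output
    return hook

importance = []
for k in range(conv.out_channels):
    handle = conv.register_forward_hook(ablate(k))
    with torch.no_grad():
        score = model(x)[0, pred].item()
    handle.remove()
    importance.append(base_score - score)   # larger drop => more important filter

top5 = sorted(range(conv.out_channels), key=lambda k: importance[k], reverse=True)[:5]
print("Filters with the largest counterfactual effect:", top5)

A per-filter drop of this kind is one simple proxy for the importance scores that the summary says can also guide pruning of less important filters; the paper's own weighting and localization procedure is more involved.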