Causal generative explainers using counterfactual inference: a case study on the Morpho-MNIST dataset

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 27, No. 3
Main Authors: Taylor-Melanson, Will; Sadeghi, Zahra; Matwin, Stan
Format: Journal Article
Language: English
Published: London: Springer London, 01.09.2024 (Springer Nature B.V.)
Summary: In this paper, we propose leveraging causal generative learning as an interpretable tool for explaining image classifiers. Specifically, we present a generative counterfactual inference approach to study the influence of visual features (pixels) as well as causal factors through generative learning. To this end, we first uncover the pixels most influential on a classifier’s decision by computing both Shapley and contrastive explanations for counterfactual images with different attribute values. We then establish a Monte Carlo mechanism using the generator of a causal generative model in order to adapt Shapley explainers to produce feature importances for the human-interpretable attributes of a causal dataset. This method is applied to the case where a classifier has been trained exclusively on the images of the causal dataset. Finally, we present optimization methods for creating counterfactual explanations of classifiers by means of counterfactual inference, proposing straightforward approaches for both differentiable and arbitrary classifiers. We use the Morpho-MNIST causal dataset as a case study for exploring our proposed methods for generating counterfactual explanations; however, our methods are also applicable to other causal datasets containing image data. We employ visual explanation methods from the OmnixAI open-source toolkit to compare them with our proposed methods. Using quantitative metrics to measure the interpretability of counterfactual explanations, we find that our proposed methods offer more interpretable explanations than those generated by OmnixAI. This finding suggests that our methods are well suited for generating highly interpretable counterfactual explanations on causal datasets.
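The Monte Carlo mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `generator` (mapping interpretable attributes such as thickness and slant to an image) and the `classifier` below are hypothetical toy stand-ins for the paper's causal generative model and image classifier, and `shapley_attribute_importance` is a generic sampling-based Shapley estimator that averages each attribute's marginal contribution over random permutations, scoring generated counterfactual images rather than raw pixels.

```python
import random
import numpy as np

# Hypothetical stand-ins for the paper's components: a generator that
# renders interpretable attributes into an image, and a classifier
# that scores the image. Names and behavior are illustrative only.

def generator(attrs):
    """Toy 'causal generator': thickness brightens the whole image,
    slant brightens only the right half."""
    img = np.zeros((4, 4))
    img += attrs["thickness"]
    img[:, 2:] += attrs["slant"]
    return img

def classifier(img):
    """Toy classifier score depending on both attributes."""
    return float(img.mean())

def shapley_attribute_importance(baseline, target, n_samples=2000, seed=0):
    """Monte Carlo Shapley values over attributes: sample random
    permutations of the attributes and average each attribute's
    marginal contribution to the classifier score when it is switched
    from its baseline value to its target (counterfactual) value."""
    rng = random.Random(seed)
    names = list(target)
    phi = {k: 0.0 for k in names}
    for _ in range(n_samples):
        order = names[:]
        rng.shuffle(order)
        attrs = dict(baseline)
        prev = classifier(generator(attrs))
        for k in order:
            attrs[k] = target[k]          # apply counterfactual value
            cur = classifier(generator(attrs))
            phi[k] += cur - prev          # marginal contribution
            prev = cur
    return {k: v / n_samples for k, v in phi.items()}
```

For example, `shapley_attribute_importance({"thickness": 0.0, "slant": 0.0}, {"thickness": 1.0, "slant": 1.0})` attributes the classifier's score change to the two attributes; because this toy classifier is linear, thickness (affecting all 16 pixels) receives twice the importance of slant (affecting 8), and the values sum to the total score difference, as Shapley efficiency requires.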
ISSN: 1433-7541, 1433-755X
DOI: 10.1007/s10044-024-01306-8