A Novel Black-Box Complementary Explanation Approach for Thorax Multi-Label Classification

Bibliographic Details
Published in: 2024 4th International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), pp. 1-8
Main Authors: Bouabdallah, Khaled; Drif, Ahlem; Kaderali, Lars
Format: Conference Proceeding
Language: English
Published: IEEE, 16.05.2024

Summary: Recently, Explainable Artificial Intelligence (XAI) has addressed a critical barrier to the clinical use of deep learning models by producing results that medical experts can understand, in contexts such as thoracic disease classification. Most existing explainability work falls into attribution-based approaches, which explain the output in terms of the input features. For medical experts this is insufficient, since attribution alone does not show how the essential features contribute to the output. This paper proposes a Discriminative Attention Guided Convolutional Neural Network (DAG-CNN) framework that relies on case-based similarity for thoracic disease diagnosis. The framework for medical imaging diagnosis follows the twin-system paradigm for finding example-based explanations. It extracts global features of the whole image and local features of the lesion area corresponding to a specific disease, then compares those features against archived diagnosed cases to retrieve the most similar cases as an explanation. Experiments on ChestX-ray14 yielded results competitive with the state of the art, and DAG-CNN performs best on pneumonia and hernia. In the explanation phase, we obtain a high accuracy of 0.93 against similar cases. This result shows that the complementary explanation approach looks deeply into the discrepancies between the output features and similar cases (images in the archive), which can support diagnosis. The proposed framework thus provides efficient similar-image explanations based on the important features contributing to the diagnosis.
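
The retrieval step summarized above, in which global and local features are fused and compared against archived diagnosed cases, can be sketched as follows. This is a minimal illustrative sketch in Python, not the authors' implementation; the feature dimensions, the cosine similarity metric, and every function and variable name are assumptions.

import numpy as np

# Minimal sketch of example-based (twin-system) retrieval: rank archived,
# already-diagnosed cases by similarity to a query image's features.
# Dimensions, the cosine metric, and all names are illustrative assumptions.

def cosine_similarity(query: np.ndarray, archive: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each archive row."""
    q = query / np.linalg.norm(query)
    a = archive / np.linalg.norm(archive, axis=1, keepdims=True)
    return a @ q

def retrieve_similar_cases(global_feat, local_feat, archive_feats, case_ids, k=5):
    """Fuse global (whole-image) and local (lesion-area) features, then
    return the k most similar archived cases as the explanation."""
    query = np.concatenate([global_feat, local_feat])
    scores = cosine_similarity(query, archive_feats)
    top_k = np.argsort(scores)[::-1][:k]
    return [(case_ids[i], float(scores[i])) for i in top_k]

# Toy usage with random stand-ins for CNN features (512-d global + 512-d local).
rng = np.random.default_rng(0)
g, l = rng.normal(size=512), rng.normal(size=512)
archive = rng.normal(size=(1000, 1024))
ids = [f"case_{i:04d}" for i in range(1000)]
print(retrieve_similar_cases(g, l, archive, ids, k=3))

In a real pipeline the random vectors would be replaced by features pooled from the CNN's global branch and its attention-guided lesion branch; the ranking and top-k selection would be unchanged.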
DOI: 10.1109/IRASET60544.2024.10548809