Exploration of Interpretability Techniques for Deep COVID-19 Classification Using Chest X-ray Images

Bibliographic Details
Published in: Journal of Imaging, Vol. 10, No. 2, p. 45
Main Authors: Chatterjee, Soumick; Saad, Fatima; Sarasaen, Chompunuch; Ghosh, Suhita; Krug, Valerie; Khatun, Rupali; Mishra, Rahul; Desai, Nirja; Radeva, Petia; Rose, Georg; Stober, Sebastian; Speck, Oliver; Nürnberger, Andreas
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 01.02.2024
Summary: The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread, and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging, such as X-ray and computed tomography (CT), combined with the potential of artificial intelligence (AI), plays an essential role in supporting medical personnel in the diagnosis process. Thus, in this article, five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble, using majority voting, were used to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images. Multilabel classification was performed to predict multiple pathologies for each patient, if present. First, the interpretability of each of the networks was thoroughly studied using local interpretability methods (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT) and a global technique (neuron activation profiles). The mean micro F1 score of the models for COVID-19 classification ranged from 0.66 to 0.875, and was 0.89 for the ensemble of the network models. The qualitative results showed that the ResNets were the most interpretable models. This research demonstrates the importance of using interpretability methods to compare different models before making a decision regarding the best-performing model.
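To make the pipeline the summary describes more concrete, here is a minimal sketch (not the authors' code) of multilabel majority voting, micro F1 scoring, and local attribution with the Captum library, which implements the listed techniques (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, DeepLIFT). The ResNet18 backbone, the 0.5 voting threshold, the label ordering, and the input shapes are assumptions made for illustration.

```python
import torch
from torchvision.models import resnet18
from sklearn.metrics import f1_score
from captum.attr import Saliency, InputXGradient, IntegratedGradients, Occlusion

# Assumed setup: a ResNet18 with its head replaced for 3 labels
# (COVID-19, pneumonia, healthy); training is omitted in this sketch.
model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 3)
model.eval()

def majority_vote(model_probs: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Multilabel majority voting.
    model_probs: (n_models, batch, n_labels) per-model sigmoid outputs."""
    votes = (model_probs >= threshold).int()          # each model's yes/no per label
    return (votes.float().mean(dim=0) >= 0.5).int()   # positive if most models agree

# Micro F1 pools true/false positives across all labels before computing F1,
# matching the "mean micro F1 score" the summary reports.
y_true = torch.tensor([[1, 0, 0], [0, 1, 0]])         # dummy ground truth
probs = torch.rand(5, 2, 3)                           # 5 models x 2 images x 3 labels (dummy)
y_pred = majority_vote(probs)
print("micro F1:", f1_score(y_true.numpy(), y_pred.numpy(), average="micro"))

# Local attribution maps for one dummy X-ray; target=0 is the assumed COVID-19 index.
# Guided backpropagation and DeepLIFT are likewise available in captum.attr
# (GuidedBackprop, DeepLift) with the same attribute() calling pattern.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
saliency_map = Saliency(model).attribute(x, target=0)
ixg_map = InputXGradient(model).attribute(x, target=0)
ig_map = IntegratedGradients(model).attribute(x, target=0, baselines=torch.zeros_like(x))
occlusion_map = Occlusion(model).attribute(
    x, target=0, sliding_window_shapes=(3, 16, 16), strides=(3, 8, 8)
)
```

In this sketch, the qualitative model comparison the summary mentions would amount to overlaying such attribution maps on the X-rays and inspecting which image regions drive each model's predictions.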
ISSN: 2313-433X
DOI: 10.3390/jimaging10020045