Ensemble Image Explainable AI (XAI) Algorithm for Severe Community-Acquired Pneumonia and COVID-19 Respiratory Infections

Bibliographic Details
Published in: IEEE Transactions on Artificial Intelligence, Vol. 4, No. 2, pp. 242-254
Main Authors: Zou, Lin; Goh, Han Leong; Liew, Charlene Jin Yee; Quah, Jessica Lishan; Gu, Gary Tianyu; Chew, Jun Jie; Kumar, Mukundaram Prem; Ang, Christine Gia Lee; Ta, Andy Wee An
Format: Journal Article
Language: English
Published: IEEE, 01.04.2023

Summary: Since the onset of the COVID-19 pandemic in 2019, many clinical prognostic scoring tools have been proposed or developed to aid clinicians in the disposition and severity assessment of pneumonia. However, there is limited work that focuses on explanation techniques that are best suited for clinicians in their decision making. In this article, we present a new image explainability method named ensemble AI explainability (XAI), which is based on the SHAP and Grad-CAM++ methods. It provides a visual explanation for a deep learning prognostic model that predicts the mortality risk of patients with community-acquired pneumonia and COVID-19 respiratory infections. In addition, we surveyed the existing literature and compiled prevailing quantitative and qualitative metrics to systematically review the efficacy of ensemble XAI and to compare it with several state-of-the-art explainability methods (LIME, SHAP, saliency map, Grad-CAM, Grad-CAM++). Our quantitative experimental results have shown that ensemble XAI has a comparable absence impact (decision impact: 0.72, confident impact: 0.24). Our qualitative experiment, in which a panel of three radiologists evaluated the degree of concordance and trust in the algorithms, showed that ensemble XAI has localization effectiveness (mean set accordance precision: 0.52, mean set accordance recall: 0.57, mean set F1: 0.50, mean set IoU: 0.36) and is the method most trusted by the panel of radiologists (mean vote: 70.2%). Finally, the deep learning interpretation dashboard used for the radiologist panel voting will be made available to the community. Our code is available at https://github.com/IHIS-HealthInsights/Interpretation-Methods-Voting-dashboard.
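
The abstract describes ensemble XAI as a fusion of SHAP and Grad-CAM++ attributions into a single visual explanation, but does not state the exact fusion rule; the sketch below is a minimal, assumed illustration of the general idea, not the authors' implementation. The function names (ensemble_xai, normalize), the weighted-averaging scheme, and the weight parameter are hypothetical, and the random arrays stand in for real per-pixel attribution maps produced by a SHAP explainer and a Grad-CAM++ implementation.

```python
import numpy as np


def normalize(attr: np.ndarray) -> np.ndarray:
    """Min-max normalize an attribution map to the [0, 1] range."""
    attr = attr.astype(np.float64)
    rng = attr.max() - attr.min()
    return (attr - attr.min()) / rng if rng > 0 else np.zeros_like(attr)


def ensemble_xai(shap_map: np.ndarray, gradcampp_map: np.ndarray,
                 weight: float = 0.5) -> np.ndarray:
    """Fuse a SHAP attribution map and a Grad-CAM++ heatmap for one image.

    NOTE: `weight` and the averaging rule are assumptions for illustration;
    the paper's actual combination scheme is defined in its linked repository.
    """
    return weight * normalize(shap_map) + (1.0 - weight) * normalize(gradcampp_map)


if __name__ == "__main__":
    # Toy usage with random maps standing in for real model attributions.
    rng = np.random.default_rng(0)
    shap_map = rng.normal(size=(224, 224))   # placeholder for a SHAP pixel attribution map
    cam_map = rng.random(size=(224, 224))    # placeholder for a Grad-CAM++ heatmap
    fused = ensemble_xai(shap_map, cam_map)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

Normalizing each map before averaging is one simple way to make the two attributions, which otherwise live on very different numeric scales, commensurable; the fused map can then be overlaid on the chest radiograph as a heatmap for radiologist review.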
ISSN: 2691-4581
DOI: 10.1109/TAI.2022.3153754