Ensembling to Leverage the Interpretability of Medical Image Analysis Systems

Bibliographic Details
Published in IEEE Access, Vol. 11; p. 1
Main Authors Zafeiriou, Argyrios; Kallipolitis, Athanasios; Maglogiannis, Ilias
Format Journal Article
Language English
Published Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023

Summary: Along with the increase in the accuracy of artificial intelligence systems, their complexity has also risen. Even with high accuracy, high-risk decision-making requires explanations of a model's decisions, which often take the form of saliency maps. This work examines the efficacy of ensembling deep convolutional neural networks to leverage explanations, under the premise that ensemble models combine the knowledge of their base models. A novel approach is presented for aggregating saliency maps derived from multiple base models, as an alternative way of combining the different perspectives that several competent models offer. The proposed methodology lowers computational cost while allowing maps of various origins to be combined. Following a saliency-map evaluation scheme, four tests are performed on three image datasets: two medical and one generic. The results suggest that interpretability is improved by combining information through the aggregation scheme. The discussion that follows provides insight into the inner workings behind the results, such as the specific combination of interpretability and ensemble methods, and offers useful suggestions for future work.
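
The record gives no implementation details for the aggregation scheme. As a minimal illustrative sketch only, assuming the maps are combined by pixel-wise averaging of min-max-normalized saliency maps (the paper's actual scheme may differ), a hypothetical helper aggregate_saliency_maps could look like this in Python:

    import numpy as np

    def aggregate_saliency_maps(maps):
        # maps: list of 2D numpy arrays, one saliency map per base model,
        # all sharing the same spatial shape.
        normalized = []
        for m in maps:
            m = m.astype(np.float64)
            rng = m.max() - m.min()
            # Min-max normalize each map so no single model dominates the average.
            normalized.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
        # The pixel-wise mean of the normalized maps serves as the ensemble explanation.
        return np.mean(normalized, axis=0)

    # Example usage: combine Grad-CAM-style maps from three base CNNs.
    # ensemble_map = aggregate_saliency_maps([map_a, map_b, map_c])

Averaging after normalization is one common way to fuse attributions from heterogeneous models, since it avoids re-running the ensemble end to end and accepts maps of various origins; the paper's specific combination rule may weight or rank the maps differently.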
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3291610