Cross X-AI: Explainable Semantic Segmentation of Laparoscopic Images in Relation to Depth Estimation

Bibliographic Details
Published in: Proceedings of ... International Joint Conference on Neural Networks, pp. 1 - 8
Main Authors: Bardozzo, Francesco; Priscoli, Mattia Delli; Collins, Toby; Forgione, Antonello; Hostettler, Alexandre; Tagliaferri, Roberto
Format: Conference Proceeding
Language: English
Published: IEEE, 18.07.2022
ISSN: 2161-4407
DOI: 10.1109/IJCNN55064.2022.9892345

More Information
Summary: In this work, two deep learning models, one trained to segment the liver and one to perform depth reconstruction, are compared and analysed through the interplay of their post-hoc explanations. The first model (a U-Net) performs liver semantic segmentation across different subjects and scenarios; in particular, the image pixels representing the liver are classified and separated from the surrounding pixels. The second model performs depth estimation, regressing the z-position of each pixel (relative depths). In general, both models carry out a form of classification task that can be explained for each model individually, and whose explanations can be combined to reveal additional relations and insights between the most relevant learned features. In detail, this work shows how post-hoc explainable AI (X-AI) systems based on Grad-CAM and Grad-CAM++ can be compared by introducing Cross X-AI (CX-AI). Typically, the post-hoc explanation maps provide different visual explanations of the models' decisions under the two proposed approaches. Our results show that the Grad-CAM++ segmentation explanation maps present cross-learning strategies similar to the disparity explanations (and vice versa).
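As a rough illustration of the pipeline described in the summary (not the authors' implementation), the sketch below computes Grad-CAM maps for a toy segmentation network and a toy depth regressor, then compares the two maps with a pixel-wise Pearson correlation as a crude stand-in for the CX-AI comparison. The architectures, target layers, class index, and correlation measure are all illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; not the authors' code. Grad-CAM maps for
# a toy segmentation network (standing in for the paper's U-Net) and a toy
# depth regressor, compared via pixel-wise correlation.
import torch
import torch.nn.functional as F


class TinySegNet(torch.nn.Module):
    """Toy encoder-decoder standing in for the liver-segmentation U-Net."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU())
        self.dec = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(32, 16, 2, stride=2), torch.nn.ReLU(),
            torch.nn.Conv2d(16, n_classes, 1))

    def forward(self, x):
        return self.dec(self.enc(x))


def grad_cam(model, target_layer, image, score_fn):
    """Return a normalised Grad-CAM map (H x W) for score_fn(model(image))."""
    acts, grads = [], []
    h_fwd = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h_bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        score = score_fn(model(image))
        model.zero_grad()
        score.backward()
        a, g = acts[0], grads[0]                    # both (1, C, h, w)
        weights = g.mean(dim=(2, 3), keepdim=True)  # channel importance
        cam = F.relu((weights * a).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                            align_corners=False).squeeze()
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    finally:
        h_fwd.remove()
        h_bwd.remove()


if __name__ == "__main__":
    image = torch.rand(1, 3, 256, 256)  # placeholder for a laparoscopic frame

    seg_net = TinySegNet().eval()
    # Explain the summed logit of the "liver" class (class index 1 here).
    seg_map = grad_cam(seg_net, seg_net.enc[2], image,
                       score_fn=lambda out: out[:, 1].sum())

    depth_net = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, 1, 3, padding=1)).eval()
    # Explain the summed relative-depth prediction.
    depth_map = grad_cam(depth_net, depth_net[0], image,
                         score_fn=lambda out: out.sum())

    # Crude cross-explanation comparison between the two heat maps.
    corr = torch.corrcoef(torch.stack([seg_map.flatten(),
                                       depth_map.flatten()]))[0, 1]
    print(f"Pearson correlation between explanation maps: {corr.item():.3f}")
```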