Explainable framework for Glaucoma diagnosis by image processing and convolutional neural network synergy: Analysis with doctor evaluation

Bibliographic Details
Published in: Future Generation Computer Systems, Vol. 129, pp. 152-169
Main Authors: Deperlioglu, Omer; Kose, Utku; Gupta, Deepak; Khanna, Ashish; Giampaolo, Fabio; Fortino, Giancarlo
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2022
Summary: Glaucoma causes blindness when left untreated for a long time, so its early diagnosis is very important. Accordingly, there have been many Deep Learning oriented studies on diagnosing Glaucoma from color fundus images. However, it is important to know how far humans can trust black-box Deep Learning models in their decision making. In this study, a hybrid solution combining image processing and deep learning was supported with Explainable Artificial Intelligence (XAI) to ensure trustworthy decision making for Glaucoma diagnosis. In detail, image processing employing both histogram equalization (HE) and contrast-limited adaptive HE (CLAHE) was used to enhance colored fundus image data. For the diagnosis, the enhanced image data was fed to an explainable convolutional neural network (CNN). The XAI was achieved via Class Activation Mapping (CAM), which provides heat-map-based explanations for the image analysis performed by the CNN. The performance of the hybrid solution was tested on the Drishti-GS, ORIGA-Light and HRF retinal image datasets through a total of twenty classification attempts. In the performance evaluation, the highest mean values were found with the ORIGA-Light dataset (accuracy: 93.5%, sensitivity/recall: 97.7%, specificity: 92.6%, precision: 93.8%, F1-score: 95.7%, and AUC: 95.1%). As the XAI contribution of this study also included an analysis by humans, the CAM-based XAI effect was evaluated by doctors. The CAM-based XAI showed an accuracy of 82.73%, which was acceptable among alternative XAI methods. Also, according to the manual diagnosis test done with the doctors, detection by the CAM-based XAI was not below 97%, and the worst diagnosis detection was 90%. Eventually, the results for the XAI effect were positive, indicating that the use of XAI for black-box deep learning improves the trust level for doctors.
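The enhancement step summarized above applies global histogram equalization alongside CLAHE. As a rough illustration of the global-HE half of that pipeline (a minimal NumPy sketch, not the authors' code; CLAHE additionally tiles the image and clips each local histogram), one channel of a fundus image could be equalized like this:

```python
import numpy as np

def hist_equalize(channel: np.ndarray) -> np.ndarray:
    """Global histogram equalization of one uint8 image channel.

    Illustrative sketch only; the paper also applies CLAHE, which
    operates on local tiles with a clipped histogram instead.
    """
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()                 # cumulative distribution of intensities
    cdf_min = cdf[cdf > 0][0]           # CDF value at the first occupied bin
    denom = cdf[-1] - cdf_min
    if denom == 0:                      # flat image: nothing to equalize
        return channel.copy()
    # Look-up table mapping intensities so the output CDF is ~uniform
    lut = np.clip((cdf - cdf_min) / denom * 255.0, 0, 255)
    lut = np.round(lut).astype(np.uint8)
    return lut[channel]
```

A low-contrast channel (values confined to, say, 100-119) is stretched by this mapping to cover the full 0-255 range, which is exactly the contrast boost the enhancement stage is after.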
• An explainable CNN with Class Activation Mapping (CAM) was used for Glaucoma diagnosis.
• Both histogram equalization and contrast-limited adaptive HE enhanced the input fundus image data.
• Performance of the solution was tested over the Drishti-GS, ORIGA-Light and HRF retinal image datasets.
• The highest mean performance findings were obtained for the ORIGA-Light dataset.
• The XAI contribution was positive as it improved doctors' trust in the deep learning solution.
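The CAM explanation referred to above weights the last convolutional feature maps by the predicted class's weights in the global-average-pooling classifier, producing a heat map over the fundus image. A hedged NumPy sketch of that standard CAM computation (an illustration of the usual formula, not the authors' implementation):

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray,
                         class_weights: np.ndarray) -> np.ndarray:
    """Standard CAM: class-weighted sum of last-conv feature maps.

    feature_maps  -- shape (C, H, W), activations of the last conv layer
    class_weights -- shape (C,), fully-connected weights of the target
                     class applied after global average pooling
    Returns an (H, W) heat map normalized to [0, 1].
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)      # keep only positive class evidence
    if cam.max() > 0:
        cam /= cam.max()            # normalize for overlay as a heat map
    return cam
```

Upsampled to the input resolution and overlaid on the fundus image, this map highlights the regions (e.g. around the optic disc) that drove the CNN's Glaucoma decision, which is what the doctors evaluated in the study.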
ISSN: 0167-739X, 1872-7115
DOI: 10.1016/j.future.2021.11.018