Disease localization and its prediction from retinal fundus images using explicitly designed deep learning architecture

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 83, no. 10, pp. 28461-28478
Main Authors: Kumari, Pammi; Saxena, Priyank
Format: Journal Article
Language: English
Published: New York: Springer US, 01.03.2024 (Springer Nature B.V.)
Summary: Visual disability is increasing due to the incidence of diabetic retinopathy (DR), but timely detection and diagnosis can provide more treatment options and a greater chance of patient survival. Retinal imaging is used for screening and timely detection of the disease. However, determining the exact stage of DR from color retinal fundus images is challenging because the images contain non-uniform lesions with indeterminate boundaries. Hence, most of the pre-trained models used in earlier studies to classify retinal images could not deliver as expected, because these models failed to capture the intricacies of retinal images. This work therefore addresses the classification of retinal images (S-0 to S-4) according to the extent of DR using a convolutional neural network (RINet) designed explicitly for retinal fundus images obtained from the APTOS dataset. To improve the classification performance of RINet, features from intermediate layers are extracted; these aid in improving the model parameters, as they indicate the actual state of the model. The extracted features are represented using layer visualization. For disease localization, Gradient-weighted Class Activation Mapping (Grad-CAM) is applied at the last convolutional layer, effectively highlighting the regions crucial for every stage in the image. In-depth ablation tests are conducted to arrive at the current form of RINet and to assess its effectiveness. RINet achieves an accuracy of 85% for multi-stage and 95% for binary classification on the test set. The simulation results show that RINet outperforms the pre-trained models, particularly in the moderate to severe DR stages.
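The Grad-CAM step described in the abstract weights each feature map of the last convolutional layer by the global-average-pooled gradient of the target class score, sums the weighted maps, and applies a ReLU to obtain a localization heatmap. A minimal NumPy sketch of that computation (not the authors' implementation; the array shapes and helper name are illustrative assumptions):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM sketch.

    feature_maps, gradients: (H, W, K) arrays taken from the last
    convolutional layer and the gradient of the class score w.r.t. it.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights alpha_k: global average pooling of the gradients.
    weights = gradients.mean(axis=(0, 1))                       # shape (K,)
    # Weighted combination of the feature maps over channels, then ReLU.
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize for overlay as a heatmap on the fundus image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4x4 feature maps with 3 channels.
rng = np.random.default_rng(0)
fmap = rng.random((4, 4, 3))
grad = rng.random((4, 4, 3))
heatmap = grad_cam(fmap, grad)
print(heatmap.shape)  # (4, 4)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the fundus photograph to highlight lesion regions per DR stage.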
ISSN: 1380-7501
EISSN: 1573-7721
DOI: 10.1007/s11042-023-16585-2