DualEye-FeatureNet: A Dual-Stream Feature Transfer Framework for Multi-Modal Ophthalmic Image Classification


Bibliographic Details
Published in: IEEE Access, Vol. 12, pp. 143985-144008
Main Authors: Shafiq, Muhammad; Fan, Quanrun; Alghamedy, Fatemah H.; Obidallah, Waeal J.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

Summary: Eye diseases are a significant health issue, due in part to the drastic increase in the use of digital gadgets and mobile devices, making early detection and intervention essential for effective treatment. In recent years, multimodal image fusion has garnered growing interest for the automated detection of various eye disorders (Glaucoma, Cataracts, Diabetic Retinopathy (DR), Myopia, and Macular Degeneration (MD)). In this work, we propose a reliable, multi-modal, automated eye disease classification method built on a novel, fully automated deep learning (DL) framework called DualEye-FeatureNet. The proposed framework is a dual-stream deep learning architecture that combines complementary deep neural network models (DarkNet53 and ResNet101) with standard clustering techniques (Fuzzy C-means and K-means) to extract features from OCT and fundus images. The integrated form of the two parallel feature streams is fed to a 3D-CNN for eye disease classification. Experimental results demonstrate the ability of the dual-stream model to capture not only structural elements but also the spatial relationships of features in complex OCT and fundus images, improving both performance and generalizability over state-of-the-art single-modality approaches. Multi-modal ophthalmic image classification accuracies of 94% for Glaucoma, 92% for Cataracts, 95% for DR, 93% for Myopia, and 91% for MD were obtained. The proposed architecture overcomes the limitations of single-modality diagnosis and emerges as a novel, fully automated deep learning framework.
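The dual-stream pipeline described in the summary can be sketched in outline: deep features are extracted from each modality by a separate backbone, each stream is compacted by clustering, and the results are concatenated into a single fused descriptor for the downstream classifier. The sketch below is a minimal numpy illustration of that fusion step only; the feature shapes, the cluster count `k`, and the random stand-ins for DarkNet53/ResNet101 activations are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal K-means: compact a set of feature vectors into k centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # Recompute centroids as cluster means.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

# Hypothetical stand-ins for backbone activations (shapes are assumptions):
# OCT stream, e.g. DarkNet53-style features; fundus stream, e.g. ResNet101-style.
rng = np.random.default_rng(42)
oct_feats = rng.normal(size=(196, 1024))
fundus_feats = rng.normal(size=(49, 2048))

# Compact each stream into k centroid vectors, then flatten and concatenate
# to form the fused descriptor handed to the downstream classifier.
k = 8
fused = np.concatenate([kmeans(oct_feats, k).ravel(),
                        kmeans(fundus_feats, k).ravel()])
print(fused.shape)  # (8*1024 + 8*2048,) = (24576,)
```

In the actual framework the fused representation is passed to a 3D-CNN (and Fuzzy C-means is used alongside K-means); this sketch only illustrates the cluster-then-concatenate fusion idea.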
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3469244