Redefining Retinal Lesion Segmentation: A Quantum Leap with DL-UNet Enhanced Auto Encoder-Decoder for Fundus Image Analysis
Published in | IEEE Access, Vol. 11, p. 1 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023 |
Subjects | |
Summary: | Lesion segmentation is essential to the initial diagnosis of diabetic retinopathy (DR). Because manually labeling lesions is time-consuming and laborious, automated segmentation methods are needed. The severity of DR is determined by the extent of degenerative lesions in the retina, which strongly influences early detection and treatment of the disease. Deep learning algorithms are therefore crucial for reliably locating the affected sites and identifying the various abnormalities in retinal fundus images. In this study, an encoder-decoder neural network with channel-wise and spatial attention mechanisms is proposed, and a deep convolutional neural network is built around patch-based analysis. Image patches are extracted with a sliding-window technique, and the architecture is trained and evaluated on the IDRiD dataset, which includes hard exudate segmentations. The trained network processes the image patches and produces a probability map that predicts the different types of lesions. A thorough experiment on IDRiD confirms the effectiveness and superiority of the proposed approach, which reaches an accuracy of 99.94 %, a marked improvement over prior work on comparable tasks. |
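The abstract describes a channel-wise and spatial attention mechanism inside the encoder-decoder, but does not give the implementation. The following is a minimal sketch of such an attention block, assuming a PyTorch implementation; the module name, reduction ratio, and kernel size are illustrative choices and not the authors' code.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Channel-wise attention followed by spatial attention (CBAM-style sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite a per-channel weight vector.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels, produce a per-pixel weight map.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)             # re-weight feature channels
        avg_map = x.mean(dim=1, keepdim=True)   # per-pixel mean over channels
        max_map = x.amax(dim=1, keepdim=True)   # per-pixel max over channels
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x
```

A block like this would typically be placed after each encoder or decoder stage so that lesion-relevant channels and pixel locations are emphasized before the next convolution.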
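The abstract also describes sliding-window patch extraction and aggregation of patch predictions into a lesion probability map. Below is a minimal sketch of that inference loop, assuming a NumPy image in [0, 1] and a trained PyTorch model that returns one logit map per patch; the patch size, stride, and function names are assumptions, not details from the paper.

```python
import numpy as np
import torch

def _starts(size: int, patch: int, stride: int) -> list:
    # Window start positions, ensuring the final window reaches the image border.
    last = max(size - patch, 0)
    s = list(range(0, last + 1, stride))
    if s[-1] != last:
        s.append(last)
    return s

def predict_probability_map(model, image, patch=256, stride=128, device="cpu"):
    """Slide a window over a fundus image (H, W, 3), run the trained network on
    each patch, and average overlapping predictions into one probability map."""
    model.eval()
    h, w, _ = image.shape
    prob = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    with torch.no_grad():
        for top in _starts(h, patch, stride):
            for left in _starts(w, patch, stride):
                tile = image[top:top + patch, left:left + patch]
                x = torch.from_numpy(tile).permute(2, 0, 1).unsqueeze(0).float().to(device)
                # Assumes the model outputs (1, 1, patch, patch) lesion logits.
                y = torch.sigmoid(model(x))[0, 0].cpu().numpy()
                prob[top:top + patch, left:left + patch] += y
                count[top:top + patch, left:left + patch] += 1.0
    return prob / np.maximum(count, 1.0)
```

Averaging the overlapping windows smooths seams at patch borders; the resulting map can then be thresholded to obtain the final hard exudate segmentation.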
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2023.3294443 |