Optic Disc and Cup Segmentation Based on Deep Convolutional Generative Adversarial Networks


Bibliographic Details
Published in: IEEE Access, Vol. 7, pp. 64483-64493
Main Authors: Jiang, Yun; Tan, Ning; Peng, Tingting
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019

Summary: Glaucoma is a chronic eye disease that causes irreversible loss of vision. Accurate segmentation of the optic disc and optic cup is a basic step in glaucoma screening. Most existing deep convolutional neural network (DCNN) methods extract insufficient feature information, so they are susceptible to pathological regions and low-quality images and have poor ability to restore context information, resulting in low segmentation accuracy. In this paper, we propose GL-Net, a multi-label DCNN model combined with a generative adversarial network. GL-Net consists of two networks: a generator and a discriminator. In the generator, skip connections promote the fusion of low-level and high-level feature information, which alleviates the difficulty of restoring detailed features during upsampling, and a reduced downsampling factor effectively limits excessive loss of feature information. In the loss function, we add an L1 distance term and a cross-entropy term to prevent mode collapse during training, which makes the segmentation results more accurate. We use transfer learning and data augmentation to alleviate the problems of insufficient data and over-fitting during training. Finally, GL-Net was verified on the DRISHTI-GS1 dataset. The experimental results show that GL-Net outperforms state-of-the-art methods such as M-Net, Stack-U-Net, RACE-net, and BCRF in terms of F1 score and boundary distance localization error (BLE). In particular, for optic cup segmentation, GL-Net outperforms RACE-net by 3.5% in F1 and by 4.16 pixels in BLE.
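The generator objective described in the summary combines an adversarial term with an L1 distance term and a pixel-wise cross-entropy term. The following is a minimal PyTorch sketch of such a combined loss; the weighting factors, tensor shapes, and function names are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a combined generator loss: adversarial term
# plus L1 distance and cross-entropy, as described in the abstract.
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, pred_logits, target_mask,
                   l1_weight=1.0, ce_weight=1.0):
    """Combined loss for the segmentation generator (illustrative).

    disc_fake_logits: discriminator scores for generated masks, shape (N, 1)
    pred_logits:      generator output logits, shape (N, C, H, W)
    target_mask:      ground-truth class indices, shape (N, H, W)
    """
    # Adversarial term: the generator tries to make the discriminator
    # score its segmentation maps as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))

    # L1 distance between predicted class probabilities and the one-hot
    # ground truth, added to stabilize training against mode collapse.
    num_classes = pred_logits.shape[1]
    one_hot = F.one_hot(target_mask, num_classes).permute(0, 3, 1, 2).float()
    l1 = F.l1_loss(torch.softmax(pred_logits, dim=1), one_hot)

    # Pixel-wise cross-entropy for the multi-label (disc/cup) segmentation.
    ce = F.cross_entropy(pred_logits, target_mask)

    return adv + l1_weight * l1 + ce_weight * ce
```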
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2917508