Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation


Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, Vol. 38, No. 11, pp. 2485-2495
Main Authors: Wang, Shujun; Yu, Lequan; Yang, Xin; Fu, Chi-Wing; Heng, Pheng-Ann
Format: Journal Article
Language: English
Published: United States, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2019

Summary: Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as a backbone. Considering the specific morphology of the OD and OC, we propose a novel morphology-aware segmentation loss to guide the network to generate accurate and smooth segmentations. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain shift challenge by encouraging the segmentations in the target domain to be similar to those in the source domain. Since a whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved first place in the OD and OC segmentation tasks of the MICCAI 2018 Retinal Fundus Glaucoma Challenge.
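The patch-based output-space idea in the summary can be sketched numerically: instead of judging the whole segmentation mask at once, a discriminator scores local patches of the predicted segmentation map, giving the adversarial loss fine-grained leverage on local details. The snippet below is a minimal, self-contained illustration, not the authors' implementation: the patch size, the use of simple patch averaging as a stand-in for a convolutional discriminator, and the loss form are all assumptions made for illustration.

```python
import numpy as np

def patch_scores(seg_prob, patch=8):
    """Reduce a segmentation probability map to one score per local patch.

    A real patch-based discriminator would be a small CNN whose output grid
    has one value per receptive-field patch; non-overlapping patch averaging
    stands in for that here (an illustrative assumption).
    """
    h, w = seg_prob.shape
    ph, pw = h // patch, w // patch
    patches = seg_prob[:ph * patch, :pw * patch].reshape(ph, patch, pw, patch)
    return patches.mean(axis=(1, 3))  # shape (ph, pw): one score per patch

def adversarial_loss(target_scores):
    """Cross-entropy pushing every target-domain patch score toward the
    'source-like' label 1, so each local patch (not just the whole mask)
    is driven to resemble source-domain segmentations."""
    eps = 1e-7
    return float(-np.mean(np.log(target_scores + eps)))

# Toy target-domain segmentation probability map (values in (0, 1)).
seg = np.clip(np.random.default_rng(0).random((64, 64)), 1e-3, 1 - 1e-3)
scores = patch_scores(seg, patch=8)   # 8x8 grid of patch-level scores
loss = adversarial_loss(scores)       # penalizes non-source-like patches
```

Because the loss averages over all patches, every local region contributes its own gradient signal, which is the motivation the summary gives for preferring patch-level over whole-segmentation discrimination.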
ISSN: 0278-0062, 1558-254X
DOI: 10.1109/TMI.2019.2899910