DeepSeeNet: A Deep Learning Model for Automated Classification of Patient-based Age-related Macular Degeneration Severity from Color Fundus Photographs


Bibliographic Details
Published in: Ophthalmology (Rochester, Minn.), Vol. 126, No. 4, pp. 565–575
Main Authors: Peng, Yifan; Dharssi, Shazia; Chen, Qingyu; Keenan, Tiarnan D.; Agrón, Elvira; Wong, Wai T.; Chew, Emily Y.; Lu, Zhiyong
Format: Journal Article
Language: English
Published: Elsevier Inc., United States, 01.04.2019
Summary: In assessing the severity of age-related macular degeneration (AMD), the Age-Related Eye Disease Study (AREDS) Simplified Severity Scale predicts the risk of progression to late AMD. However, its manual use requires the time-consuming participation of expert practitioners. Although several automated deep learning systems have been developed for classifying color fundus photographs (CFP) of individual eyes by AREDS severity score, none to date has used a patient-based scoring system that assigns a severity score from images of both eyes. DeepSeeNet, a deep learning model, was developed to classify patients automatically by the AREDS Simplified Severity Scale (score 0–5) using bilateral CFP. DeepSeeNet was trained on 58,402 images and tested on 900 images from the longitudinal follow-up of 4,549 AREDS participants. Gold standard labels were obtained from reading center grades. DeepSeeNet simulates the human grading process by first detecting individual AMD risk factors (drusen size, pigmentary abnormalities) in each eye and then calculating a patient-based AMD severity score using the AREDS Simplified Severity Scale. The main outcome measures were overall accuracy, specificity, sensitivity, Cohen's kappa, and area under the curve (AUC); the performance of DeepSeeNet was compared with that of retinal specialists. DeepSeeNet performed better on patient-based classification (accuracy = 0.671; kappa = 0.558) than retinal specialists (accuracy = 0.599; kappa = 0.467), with high AUC in the detection of large drusen (0.94), pigmentary abnormalities (0.93), and late AMD (0.97). DeepSeeNet also outperformed retinal specialists in the detection of large drusen (accuracy 0.742 vs. 0.696; kappa 0.601 vs. 0.517) and pigmentary abnormalities (accuracy 0.890 vs. 0.813; kappa 0.723 vs. 0.535) but showed lower performance in the detection of late AMD (accuracy 0.967 vs. 0.973; kappa 0.663 vs. 0.754).
By simulating the human grading process, DeepSeeNet demonstrated high accuracy with increased transparency in the automated assignment of individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. These results highlight the potential of deep learning to assist and enhance clinical decision-making in patients with AMD, in tasks such as early AMD detection and risk prediction for developing late AMD. DeepSeeNet is publicly available at https://github.com/ncbi-nlp/DeepSeeNet.
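The two-step grading process described in the summary — detect per-eye risk factors, then combine them into a patient-level score — can be sketched as follows. The scoring rule below follows the published AREDS Simplified Severity Scale (1 point per eye for large drusen, 1 point per eye for pigmentary abnormalities, plus 1 point for bilateral intermediate drusen when neither eye has large drusen); treating late AMD in either eye as a separate score of 5, and the field names used here, are illustrative assumptions, not details taken from this record.

```python
def simplified_severity_score(eyes):
    """Patient-level AREDS Simplified Severity Scale score from
    per-eye risk factors.

    `eyes` is a two-element list, one dict per eye, with boolean
    fields (names are hypothetical): large_drusen,
    intermediate_drusen, pigmentary_abnormality, late_amd.
    """
    # Assumed convention: late AMD in either eye maps to the
    # highest category (5) on the paper's 0-5 scale.
    if any(e["late_amd"] for e in eyes):
        return 5
    # One risk point per eye for large drusen and for
    # pigmentary abnormalities (0-4 total).
    score = sum(e["large_drusen"] for e in eyes)
    score += sum(e["pigmentary_abnormality"] for e in eyes)
    # Bilateral intermediate drusen, with no large drusen in
    # either eye, counts as one additional risk point.
    if (not any(e["large_drusen"] for e in eyes)
            and all(e["intermediate_drusen"] for e in eyes)):
        score += 1
    return score


def make_eye(**flags):
    """Helper: an eye with all risk factors absent unless overridden."""
    eye = {"large_drusen": False, "intermediate_drusen": False,
           "pigmentary_abnormality": False, "late_amd": False}
    eye.update(flags)
    return eye
```

In DeepSeeNet itself, the per-eye booleans would come from the model's three sub-networks (drusen, pigment, late AMD) applied to each fundus photograph; this sketch only shows the deterministic aggregation step that makes the patient-level prediction interpretable.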
Authorship
Conception and design: Peng, Dharssi, Chew, Lu
Data collection: Peng, Dharssi, Chen, Agron, Wong, Chew
Analysis and interpretation: Peng, Dharssi, Chen, Keenan, Agron, Wong, Chew, Lu
Drafting the work: Peng, Dharssi, Chen, Keenan, Wong, Chew, Lu
ISSN: 0161-6420
EISSN: 1549-4713
DOI: 10.1016/j.ophtha.2018.11.015