Telemedical diagnosis of retinopathy of prematurity: accuracy of expert versus non-expert graders

Bibliographic Details
Published in: British Journal of Ophthalmology, Vol. 94, No. 3, pp. 351–356
Main Authors: Williams, Steven L, Wang, Lu, Kane, Steven A, Lee, Thomas C, Weissgold, David J, Berrocal, Audina M, Rabinowitz, Daniel, Starren, Justin, Flynn, John T, Chiang, Michael F
Format: Journal Article
Language: English
Published: BMJ Publishing Group Ltd, London, 01.03.2010

More Information
Summary:
Background/aims: To assess the accuracy of telemedical retinopathy of prematurity (ROP) diagnosis by trained non-expert graders compared with expert graders.
Methods: Eye examinations (n=248) from 67 consecutive infants were captured using wide-angle retinal photography (RetCam-II, Clarity Medical Systems, Pleasanton, California, USA). Non-expert graders attended two 1-h training sessions on image-based ROP diagnosis. Using a web-based telemedicine system, 14 non-expert and three expert graders provided a diagnosis for each eye: no ROP, mild ROP, type 2 pre-threshold ROP, or treatment-requiring ROP. All diagnoses were compared with a reference standard of dilated indirect ophthalmoscopy by an experienced paediatric ophthalmologist.
Results: For detection of type 2 or worse ROP, the mean (range) sensitivities and specificities were 0.95 (0.94–0.97) and 0.93 (0.91–0.96) for experts, 0.87 (0.71–0.97) and 0.73 (0.39–0.95) for resident non-experts, and 0.73 (0.41–0.88) and 0.91 (0.84–0.96) for student non-experts, respectively. For detection of treatment-requiring ROP, the mean (range) sensitivities and specificities were 1.00 (1.00–1.00) and 0.93 (0.88–0.96) for experts, 0.88 (0.50–1.00) and 0.84 (0.71–0.98) for resident non-experts, and 0.82 (0.42–1.00) and 0.92 (0.83–0.97) for student non-experts, respectively.
Conclusions: Mean sensitivity and specificity of trained non-experts were lower than those of experts, although several non-experts had high accuracy. Development of methods for training non-expert graders may help support telemedical ROP evaluation.
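The sensitivities and specificities above summarise how often a grader's image-based call agrees with the reference-standard ophthalmoscopic diagnosis at a given severity cut-off. The sketch below (not taken from the paper; the severity labels, function name, and example data are hypothetical) illustrates how such per-grader values could be computed by dichotomising the four-level diagnosis at "type 2 or worse" and comparing against the reference standard.

```python
# Minimal sketch, assuming diagnoses are stored as one label per eye examination.
# Ordered severity scale matching the study's four diagnostic categories.
SEVERITY = {
    "no ROP": 0,
    "mild ROP": 1,
    "type 2 ROP": 2,
    "treatment-requiring ROP": 3,
}

def sensitivity_specificity(grader, reference, threshold="type 2 ROP"):
    """Return (sensitivity, specificity) for detecting `threshold` or worse ROP.

    grader, reference: equal-length lists of diagnosis labels (one per eye exam).
    """
    cut = SEVERITY[threshold]
    tp = fp = tn = fn = 0
    for g, r in zip(grader, reference):
        grader_pos = SEVERITY[g] >= cut   # grader calls disease at/above cut-off
        ref_pos = SEVERITY[r] >= cut      # reference standard at/above cut-off
        if ref_pos and grader_pos:
            tp += 1
        elif ref_pos and not grader_pos:
            fn += 1
        elif not ref_pos and grader_pos:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: one grader's calls for four eye examinations.
reference = ["no ROP", "mild ROP", "type 2 ROP", "treatment-requiring ROP"]
grader    = ["no ROP", "type 2 ROP", "type 2 ROP", "treatment-requiring ROP"]
print(sensitivity_specificity(grader, reference))  # (1.0, 0.5)
```

In the study, such values would be computed once per grader and then averaged within each group (experts, resident non-experts, student non-experts) to give the mean and range figures reported above.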
PMID: 19955195
ISSN: 0007-1161, 1468-2079
DOI: 10.1136/bjo.2009.166348