Man or machine: Multi-institutional evaluation of automated chart review

Bibliographic Details
Published in: Journal of the American College of Surgeons, Vol. 213, No. 3, p. S107
Main Authors: Dodgion, Christopher M., MD, MSPH; Nguyen, Thien M., BS; Karcz, Anita, MD, MBA; Hu, Yue-Yung, MD, MPH; Jiang, Wei, MS; Corso, Katherine A., MPH; Lipsitz, Stuart R., SCD; D'Avolio, Leonard W., PhD; Greenberg, Caprice C., MD, MPH, FACS
Format: Journal Article
Language: English
Published: Elsevier Inc., 2011

Summary:
Introduction: Obtaining valuable clinical information from the free text of medical records currently requires labor-intensive manual abstraction. With electronic medical records (EMRs), variables recorded in free text become accessible through automated retrieval, increasing the data available for research. We aimed to evaluate the feasibility of automated isolation and categorization of breast cancer patients' pathology and operative reports in a multi-institutional cohort.
Methods: 6,037 patients underwent an operation for breast cancer at 66 hospitals from 2007 to 2008, and 103,106 free-text reports were isolated from these patients' EMRs. Two surgical abstractors classified 1,597 randomly selected documents as breast cancer operative notes, pathology reports, or not relevant. This reference set was used to train and test the Automated Retrieval Console (ARC); the ARC model was chosen to maximize sensitivity. Agreement between two chart categorizations (manual vs. manual, or manual vs. ARC) was assessed using Cohen's kappa.
Results: Agreement between the manual abstractors (kappa) was 0.92 (95% CI 0.89-0.95) overall, 0.91 (95% CI 0.87-0.95) for operative notes, and 0.96 (95% CI 0.91-1.00) for pathology reports. Disagreements on 152 of 1,597 documents (9.5%) required adjudication by a surgical oncologist. ARC agreement with the adjudicated abstraction (kappa) was 0.88 (95% CI 0.84-0.92) overall, 0.86 (95% CI 0.82-0.91) for operative notes, and 0.93 (95% CI 0.87-0.99) for pathology reports.
Conclusions: ARC is nearly as reliable as manual chart review. This open-source software can be used to categorize disease-specific free-text reports across multiple EMR platforms, facilitating rapid multi-institutional chart review for health services and clinical research.
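Both the abstractor-vs.-abstractor and ARC-vs.-adjudicated comparisons are summarized with Cohen's kappa, the chance-corrected agreement statistic kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance. The abstract does not describe the authors' statistical code, so the sketch below is only an illustration of how kappa and an approximate 95% CI could be computed for two sets of document labels; the category names, the simulated labels, and the bootstrap CI are assumptions for the example, not the study's actual pipeline (scikit-learn's cohen_kappa_score is a standard implementation).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)

# Hypothetical document categories; the study's reference set labeled each
# report as a breast cancer operative note, a pathology report, or not relevant.
CATEGORIES = ["operative_note", "pathology_report", "not_relevant"]

# Simulated labels for two raters (e.g., abstractor vs. abstractor, or
# adjudicated reference vs. ARC). Real work would read labels from the data.
n_docs = 1597
rater_a = rng.choice(CATEGORIES, size=n_docs, p=[0.25, 0.25, 0.50])
rater_b = np.where(rng.random(n_docs) < 0.93,          # ~93% raw agreement
                   rater_a,
                   rng.choice(CATEGORIES, size=n_docs))

# Cohen's kappa: agreement between the two label sets, corrected for chance.
kappa = cohen_kappa_score(rater_a, rater_b)

# Approximate 95% CI via a nonparametric bootstrap over documents
# (the paper does not state which CI method was used).
boot_kappas = []
for _ in range(2000):
    idx = rng.integers(0, n_docs, size=n_docs)
    boot_kappas.append(cohen_kappa_score(rater_a[idx], rater_b[idx]))
ci_low, ci_high = np.percentile(boot_kappas, [2.5, 97.5])

print(f"kappa = {kappa:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```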
ISSN: 1072-7515; 1879-1190
DOI: 10.1016/j.jamcollsurg.2011.06.249