Validation of a Natural Language Machine Learning Model for Safety Literature Surveillance

Bibliographic Details
Published in: Drug Safety, Vol. 47, No. 1, pp. 71-80
Main Authors: Park, Jiyoon; Djelassi, Malek; Chima, Daniel; Hernandez, Robert; Poroshin, Vladimir; Iliescu, Ana-Maria; Domalik, Douglas; Southall, Noel
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing (Springer Nature B.V.), 2024
Summary:
Introduction: As part of routine safety surveillance, thousands of articles of potential interest are manually triaged for review by safety surveillance teams. This manual triage task is a strong candidate for automation because of the abundance of process data available for training, the performance of natural language processing algorithms on this type of cognitive task, and the small number of safety signals that originate from literature review, which gives the task a lower risk profile. However, deep learning algorithms introduce unique risks, and the validation of such models for use in Good Pharmacovigilance Practice remains an open question.
Objective: To qualify an automated, deep learning approach to literature surveillance for use at AstraZeneca.
Methods: The study is a prospective validation of a literature surveillance triage model, comparing its real-world performance with that of human surveillance teams working in parallel. The biggest risk in modifying this triage process is missing a safety signal (a model false negative), so model recall is the main evaluation metric considered.
Results: The model demonstrates consistent global performance from training through testing, with recall rates comparable to those of existing surveillance teams. The model is accepted for use specifically for those products where non-inferiority to the manual process is rigorously demonstrated.
Conclusion: Characterizing model performance prospectively, under real-world conditions, allows us to thoroughly examine model consistency and failure modes, qualifying the model for use in our surveillance processes. We also identify potential future improvements and recognize the opportunity for the community to collaborate on this shared task.
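The abstract identifies recall as the primary evaluation metric, since a false negative corresponds to a potentially missed safety signal. As a minimal sketch of that calculation (the labels below are hypothetical, not the study's data):

```python
# recall = TP / (TP + FN), where a false negative is a relevant article
# the model wrongly screened out -- the costliest error in surveillance.

def recall(y_true, y_pred):
    """Fraction of truly relevant articles the model flagged for review."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# 1 = article relevant to safety surveillance, 0 = not relevant (illustrative)
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
print(recall(y_true, y_pred))  # 3 of 4 relevant articles flagged -> 0.75
```

In a non-inferiority framing like the one described, this model recall would be compared against the recall of the parallel manual triage process, product by product.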
ISSN: 0114-5916; 1179-1942
DOI: 10.1007/s40264-023-01367-4