Predicting postoperative chronic opioid use with fair machine learning models integrating multi-modal data sources: a demonstration of ethical machine learning in healthcare
Published in | Journal of the American Medical Informatics Association: JAMIA, Vol. 32, No. 6, pp. 985-997
Format | Journal Article |
Language | English |
Published | England, 01.06.2025
Summary | Building upon our previous work on predicting chronic opioid use using electronic health records (EHR) and wearable data, this study leveraged the Health Equity Across the AI Lifecycle (HEAAL) framework to (a) fine-tune the previously built model with genomic data and evaluate model performance in predicting chronic opioid use and (b) apply IBM's AIF360 pre-processing toolkit to mitigate bias related to gender and race and evaluate model performance using various fairness metrics.
Participants included approximately 271 All of Us Research Program subjects with EHR, wearable, and genomic data. We fine-tuned 4 machine learning models on the new dataset, and the SHapley Additive exPlanations (SHAP) technique identified the best-performing model's key predictors. IBM's AIF360 pre-processing toolkit was then applied to improve fairness across gender and race groups.
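As a concrete illustration of the SHAP step mentioned above, the sketch below fits a tree-based classifier on synthetic data and ranks features by mean absolute SHAP value. The feature names, the gradient-boosting model, and the random data are assumptions for demonstration only, not the study's actual dataset or pipeline.

```python
# A minimal SHAP ranking sketch on synthetic data (assumed, not the study's pipeline).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["DRD1_rs4532", "surgery_type", "activity_minutes"]  # illustrative names only
X = rng.normal(size=(271, len(feature_names)))
y = rng.integers(0, 2, size=271)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles; the mean absolute
# SHAP value per feature gives a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```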
The genetic data enhanced model performance over the prior model, with the area under the curve improving from 0.90 (95% CI, 0.88-0.92) to 0.95 (95% CI, 0.89-0.95). Key predictors included Dopamine D1 Receptor (DRD1) rs4532, general type of surgery, and time spent in physical activity. The reweighing pre-processing technique applied to the stacking algorithm effectively improved the model's fairness across racial and gender groups without compromising performance.
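The reweighing and fairness evaluation summarized above can be sketched with IBM's AIF360 toolkit roughly as below. The toy dataframe, the 0/1 coding of the protected attribute, the base learners in the stacking ensemble, and the particular fairness metrics printed are illustrative assumptions rather than the study's exact setup.

```python
# A minimal AIF360 sketch: reweigh a toy dataset by gender, train a stacking
# ensemble with the resulting sample weights, and check group fairness metrics.
# All column names, group codings, and models here are assumptions.
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import ClassificationMetric
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.integers(0, 2, 500),             # protected attribute (assumed 0/1 coding)
    "activity_minutes": rng.normal(size=500),      # stand-in wearable feature
    "drd1_rs4532": rng.integers(0, 3, 500),        # stand-in genotype feature
    "chronic_opioid_use": rng.integers(0, 2, 500), # binary outcome label
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["chronic_opioid_use"],
    protected_attribute_names=["gender"],
)
unprivileged, privileged = [{"gender": 0}], [{"gender": 1}]

# Pre-processing mitigation: reweigh instances so favorable-label rates are
# balanced across the privileged and unprivileged groups before training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

# Stacking ensemble trained with the reweighed sample weights.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(dataset_rw.features, dataset_rw.labels.ravel(),
          sample_weight=dataset_rw.instance_weights)

# Group fairness check on the model's predictions.
dataset_pred = dataset.copy(deepcopy=True)
dataset_pred.labels = stack.predict(dataset.features).reshape(-1, 1)
metric = ClassificationMetric(dataset, dataset_pred,
                              unprivileged_groups=unprivileged,
                              privileged_groups=privileged)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
print("Equal opportunity difference:", metric.equal_opportunity_difference())
```

One appeal of reweighing as a pre-processing strategy is that it only changes sample weights, so any downstream estimator that accepts sample_weight, including a stacking ensemble like the one above, can be reused without modification.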
We leveraged 2 dimensions of the HEAAL framework to build a fair artificial intelligence (AI) solution. Incorporating multi-modal datasets (including wearable and genetic data) and applying bias mitigation strategies can help models assess risk more fairly and accurately across diverse populations, promoting fairness in AI in healthcare.
ISSN | 1067-5027, 1527-974X
DOI | 10.1093/jamia/ocaf053