An Evaluation Toolkit to Guide Model Selection and Cohort Definition in Causal Inference
Format | Journal Article |
---|---|
Language | English |
Published | 02.06.2019 |
Summary: Real-world observational data, together with causal inference, allow the estimation of causal effects when randomized controlled trials are not available. To be accepted into practice, such predictive models must be validated on the dataset at hand, and thus require a comprehensive evaluation toolkit, as introduced here. Since effect estimates cannot be evaluated directly, we turn to evaluating the observable properties of causal inference, namely the observed outcomes and the treatment assignment. We developed a toolkit that expands established machine learning evaluation methods and adds several causal-specific ones. Evaluations can be applied in cross-validation, in a train-test scheme, or on the training data. Multiple causal inference methods are implemented within the toolkit in a way that allows modular use of the underlying machine learning models; the toolkit is thus agnostic to the machine learning model that is used. We showcase our approach using a rheumatoid arthritis cohort (about 120K patients) extracted from the IBM MarketScan® Research Database. We introduce an iterative pipeline of data definition, model definition, and model evaluation. Using this pipeline, we demonstrate how each evaluation component helps drive model selection and refine the data extraction criteria in a way that yields more reproducible results and ensures that the causal question is answerable with the available data. Furthermore, we show how the evaluation toolkit can be used to verify that performance is maintained on subsets of the data, thus allowing exploration of questions that move towards personalized medicine.
DOI: 10.48550/arxiv.1906.00442
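The record above reproduces only the abstract, not the toolkit itself. As a rough illustration of the idea it describes (evaluating the observable components of a causal model, namely treatment assignment and observed outcomes, since the effect itself is unobservable), the sketch below uses scikit-learn on synthetic data. All variable names, models, and metrics are placeholder assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' toolkit): check the observable parts of a
# causal model -- treatment assignment and factual outcomes -- in cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.normal(size=(n, d))                                # covariates
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))            # treatment depends on X
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 1] + a))))      # outcome depends on X and a

# 1) Treatment-assignment check: how well does a propensity model recover who was
#    treated? AUC near 1.0 suggests near-deterministic assignment (poor overlap);
#    AUC near 0.5 suggests the measured covariates barely explain treatment.
propensity = cross_val_predict(LogisticRegression(max_iter=1000), X, a,
                               cv=5, method="predict_proba")[:, 1]
print("propensity AUC:", roc_auc_score(a, propensity))

# 2) Observed-outcome check: does an outcome model predict the factual outcomes
#    within each treatment arm?
for arm in (0, 1):
    idx = a == arm
    pred = cross_val_predict(LogisticRegression(max_iter=1000), X[idx], y[idx],
                             cv=5, method="predict_proba")[:, 1]
    print(f"outcome AUC (arm {arm}):", roc_auc_score(y[idx], pred))
```

Because both checks score only quantities that are actually observed, they can be run in cross-validation or a train-test split exactly as for ordinary supervised models, which is the sense in which such diagnostics can guide model selection and cohort refinement.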