Confidence Interval Estimation of Predictive Performance in the Context of AutoML
Main Authors:
Format: Journal Article
Language: English
Published: 12.06.2024

Summary: Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI), not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the "winner's curse", i.e., the estimation bias induced by cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of nine state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95% CI include the true performance at least 95% of the time), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first to cover most, if not all, such methods, and it extends previous work to imbalanced and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of BBC but is more computationally efficient. The results show that BBC-F and BBC dominate the other methods on all measured metrics.

DOI: 10.48550/arxiv.2406.08099
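
The two ideas at the heart of the abstract, the winner's curse and bootstrap bias correction, can be illustrated with a small sketch. The snippet below is a minimal, hypothetical Python illustration of the general BBC idea (bootstrapping the pooled out-of-sample predictions, re-selecting the winning configuration inside each bootstrap sample, and scoring it on the out-of-bag samples); the function name `bbc_ci`, the 0/1 correctness encoding, and all parameters are assumptions made for illustration, not the authors' implementation, and the faster BBC-F variant is not shown.

```python
# Hypothetical sketch: bias-corrected performance estimate and CI after
# selecting the best of several cross-validated configurations.
# Re-selecting the winner inside each bootstrap sample and scoring it only
# on out-of-bag samples is what removes the winner's-curse optimism.
import numpy as np

rng = np.random.default_rng(0)

def bbc_ci(oos_correct, n_boot=1000, alpha=0.05):
    """oos_correct: (n_samples, n_configs) 0/1 matrix, entry = 1 if that
    configuration's pooled out-of-sample CV prediction for the sample was
    correct. Returns (bias-corrected estimate, CI lower, CI upper)."""
    n, _ = oos_correct.shape
    boot_perf = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # bootstrap sample indices
        oob = np.setdiff1d(np.arange(n), idx)       # out-of-bag indices
        if oob.size == 0:
            continue
        # pick the winning configuration on the bootstrap sample ...
        winner = oos_correct[idx].mean(axis=0).argmax()
        # ... and evaluate it only on samples not used for the selection
        boot_perf.append(oos_correct[oob, winner].mean())
    boot_perf = np.asarray(boot_perf)
    lo, hi = np.quantile(boot_perf, [alpha / 2, 1 - alpha / 2])
    return boot_perf.mean(), lo, hi

# Toy demonstration: 20 configurations with the same true accuracy of 0.7.
# Naively reporting the best cross-validated score overestimates it.
n, c, true_acc = 500, 20, 0.7
correct = (rng.random((n, c)) < true_acc).astype(float)
naive = correct.mean(axis=0).max()                  # winner's-curse estimate
est, lo, hi = bbc_ci(correct)
print(f"naive winner accuracy: {naive:.3f}")
print(f"BBC estimate: {est:.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

In this toy setup the naive winner's score sits above the true accuracy of 0.7, while the bias-corrected estimate and its percentile interval are centered near it, which is exactly the selection bias the compared CI methods must account for.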