Calibration after bootstrap for accurate uncertainty quantification in regression models

Bibliographic Details
Published in: npj Computational Materials, Vol. 8, no. 1, pp. 1-9
Main Authors: Palmer, Glenn; Du, Siqi; Politowicz, Alexander; Emory, Joshua Paul; Yang, Xiyu; Gautam, Anupraas; Gupta, Grishma; Li, Zhelong; Jacobs, Ryan; Morgan, Dane
Format: Journal Article
Language: English
Published: London: Nature Publishing Group (Nature Portfolio), 20.05.2022

Summary: Obtaining accurate estimates of machine learning model uncertainties on newly predicted data is essential for understanding the accuracy of the model and whether its predictions can be trusted. A common approach to such uncertainty quantification is to estimate the variance from an ensemble of models, which are often generated by the generally applicable bootstrap method. In this work, we demonstrate that the direct bootstrap ensemble standard deviation is not an accurate estimate of uncertainty but that it can be simply calibrated to dramatically improve its accuracy. We demonstrate the effectiveness of this calibration method for both synthetic data and numerous physical datasets from the field of Materials Science and Engineering. The approach is motivated by applications in physical and biological science but is quite general and should be applicable for uncertainty quantification in a wide range of machine learning regression models.
ISSN: 2057-3960
DOI: 10.1038/s41524-022-00794-8
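
Illustrative sketch: the summary describes estimating uncertainty from the standard deviation of a bootstrap ensemble and then calibrating that estimate. The Python sketch below shows one way such a workflow can look, using a bootstrap ensemble of scikit-learn regressors and a simple linear rescaling of the ensemble standard deviation fitted against held-out residuals. The linear form (a*sigma + b), the choice of RandomForestRegressor, and the synthetic data are assumptions made for illustration and are not necessarily the exact procedure used in the paper.

    # Minimal sketch (assumptions noted above): bootstrap-ensemble uncertainty
    # followed by a simple linear recalibration on a held-out split.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic data: y = sin(x) + noise (illustrative only)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)
    X_train, X_cal, y_train, y_cal = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Bootstrap ensemble: each member is trained on a resampled training set
    n_models = 20
    ensemble = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))
        model = RandomForestRegressor(n_estimators=50, random_state=0)
        model.fit(X_train[idx], y_train[idx])
        ensemble.append(model)

    def predict_with_sigma(models, X):
        # Ensemble mean as prediction, ensemble standard deviation as raw uncertainty
        preds = np.stack([m.predict(X) for m in models])
        return preds.mean(axis=0), preds.std(axis=0)

    # Fit a linear recalibration sigma_cal = a*sigma + b on the held-out split
    mu_cal, sig_cal = predict_with_sigma(ensemble, X_cal)
    abs_resid = np.abs(y_cal - mu_cal)
    a, b = np.polyfit(sig_cal, abs_resid, 1)  # slope and intercept of |residual| vs sigma

    # Calibrated uncertainty for new points
    mu_new, sig_new = predict_with_sigma(ensemble, X_cal[:5])
    sig_new_calibrated = a * sig_new + b

In practice the recalibration parameters would be fit on a dedicated validation set and the calibrated uncertainties checked on separate test data, for example by comparing the fraction of residuals falling within the predicted error bars to the expected coverage.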