Bootstrapping to Assess and Improve Atmospheric Prediction Models

Bibliographic Details
Published in: Data Mining and Knowledge Discovery, Vol. 4, No. 1, p. 29
Main Author: J. Sunil Rao
Format: Journal Article
Language: English
Published: New York, Springer Nature B.V., 01.04.2000

Summary: Bootstrapping is a simple technique typically used to assess the accuracy of estimates of model parameters, using plug-in principles and replacing sometimes unwieldy theory with computer simulation. Common uses include variance estimation and confidence interval construction for model parameters. It also provides a way to estimate the prediction accuracy of regression models with continuous and class-valued outcomes. In this paper we overview some of these applications of the bootstrap, focusing on bootstrap estimates of prediction error, and also explore how the bootstrap can be used to improve the prediction accuracy of unstable models such as tree-structured classifiers through aggregation. The improvements can typically be attributed to variance reduction in the classical regression setting and, more generally, to a smoothing of decision boundaries in the classification setting. These advances have important implications for the way atmospheric prediction models can be improved, and illustrations of this will be shown. For class-valued outcomes, an interesting graphic known as the CAT scan can be constructed to help understand the aggregated decision boundary. This will be illustrated using simulated data.
ISSN: 1384-5810, 1573-756X
DOI: 10.1023/A:1009876615946
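The sketch below is not part of the record or the paper; it is a minimal illustration, under my own assumptions, of the two ideas the abstract summarizes: a leave-out (out-of-bag) bootstrap estimate of prediction error, and bootstrap aggregation (bagging) of an unstable tree-structured classifier. It uses scikit-learn decision trees and a simulated two-class problem of my own choosing; the paper's specific estimators and the CAT scan graphic are not reproduced.

```python
# Illustrative sketch only: plain out-of-bag bootstrap error estimate and bagged trees
# on simulated data. Dataset, seed, and B are arbitrary assumptions for the example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Simulated two-class data with a nonlinear (circular) decision boundary.
n = 300
X = rng.normal(size=(n, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

B = 50  # number of bootstrap replicates

# --- Bootstrap (leave-out / out-of-bag) estimate of prediction error ---
errors = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)        # resample rows with replacement
    oob = np.setdiff1d(np.arange(n), idx)   # observations left out of this replicate
    if oob.size == 0:
        continue
    tree = DecisionTreeClassifier().fit(X[idx], y[idx])
    errors.append(np.mean(tree.predict(X[oob]) != y[oob]))
print("Bootstrap (leave-out) error estimate:", np.mean(errors))

# --- Bagging: aggregate trees grown on bootstrap samples ---
trees = [DecisionTreeClassifier().fit(X[idx], y[idx])
         for idx in (rng.integers(0, n, size=n) for _ in range(B))]

def bagged_predict(x_new):
    """Majority vote over the bootstrap-grown trees."""
    votes = np.stack([t.predict(x_new) for t in trees])
    return (votes.mean(axis=0) > 0.5).astype(int)

# Compare a single tree with the bagged ensemble on fresh simulated test data.
X_test = rng.normal(size=(1000, 2))
y_test = (X_test[:, 0] ** 2 + X_test[:, 1] ** 2 > 1.0).astype(int)
single = DecisionTreeClassifier().fit(X, y)
print("Single tree test error :", np.mean(single.predict(X_test) != y_test))
print("Bagged trees test error:", np.mean(bagged_predict(X_test) != y_test))
```

The aggregation step is where the abstract's claimed improvement typically shows up: individual trees vary a lot from one bootstrap sample to the next, and averaging their votes smooths the decision boundary, usually lowering test error relative to a single tree.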