Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods

Bibliographic Details
Published in: Computational Statistics & Data Analysis, Vol. 54, no. 12, pp. 2976-2989
Main Authors: Borra, Simone; Di Ciaccio, Agostino
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.12.2010
Series: Computational Statistics & Data Analysis

Summary: The estimators most widely used to evaluate the prediction error of a non-linear regression model are examined. An extensive simulation study allowed the performance of these estimators to be compared across different non-parametric methods and with varying signal-to-noise ratio and sample size. Estimators based on resampling, such as Leave-one-out, parametric and non-parametric Bootstrap, repeated Cross-Validation methods and Hold-out, were considered. The prediction methods used are Regression Trees, Projection Pursuit Regression and Neural Networks. The repeated-corrected 10-fold Cross-Validation estimator and the Parametric Bootstrap estimator obtained the best performance in the simulations.
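
The repeated 10-fold cross-validation estimator highlighted in the summary averages out-of-fold squared errors over several random partitions of the data. The following is only an illustrative sketch of plain repeated 10-fold CV in Python with scikit-learn, not the bias-corrected variant evaluated by the authors; the data set, model, and parameter values (5 repeats, tree depth, noise level) are assumptions for the example.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Simulated regression data with a moderate noise level (assumed settings).
X, y = make_friedman1(n_samples=200, noise=1.0, random_state=0)

# A regression tree, one of the prediction methods mentioned in the summary.
model = DecisionTreeRegressor(max_depth=5, random_state=0)

# Repeated 10-fold CV: average out-of-fold squared error over 5 repetitions.
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
mse_scores = -cross_val_score(model, X, y, cv=cv,
                              scoring="neg_mean_squared_error")

print(f"Repeated 10-fold CV estimate of prediction error (MSE): "
      f"{mse_scores.mean():.3f} +/- {mse_scores.std():.3f}")
```

Averaging over repeated random partitions reduces the variability of a single 10-fold split; the corrected version studied in the paper additionally adjusts for the bias of the CV estimate.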
ISSN: 0167-9473, 1872-7352
DOI: 10.1016/j.csda.2010.03.004