Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals


Bibliographic Details
Published in: Statistical Papers (Berlin, Germany), Vol. 62, No. 5, pp. 2407–2431
Main Authors: Pelawa Watagoda, Lasanthi C. R.; Olive, David J.
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.10.2021 (Springer Nature B.V.)

Summary: Consider the multiple linear regression model Y = β_1 + β_2 x_2 + ⋯ + β_p x_p + e = x^T β + e with sample size n. This paper compares six shrinkage estimators: forward selection, lasso, partial least squares, principal components regression, lasso variable selection, and ridge regression, using large sample theory and two new prediction intervals that are asymptotically optimal if the estimator β̂ is a consistent estimator of β. Few prediction intervals have been developed for p > n, and those are not asymptotically optimal. For fixed p, the large sample theory for variable selection estimators such as forward selection is new, and the theory shows that lasso variable selection is √n consistent under much milder conditions than lasso. This paper also simplifies the proofs of the large sample theory for lasso, ridge regression, and the elastic net.
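As a rough illustration of the ideas in the summary (not the paper's actual algorithm or simulation design), the sketch below fits one of the six estimators, ridge regression, via its closed form (X'X + λI)^{-1}X'y on simulated data from the model Y = x^T β + e, then forms a simple residual-quantile prediction interval. Such an interval is asymptotically valid when β̂ is consistent; the paper's new intervals add corrections to make them asymptotically optimal. All data-generating values and the choice of λ are illustrative assumptions.

```python
import numpy as np

# Simulate data from Y = x^T beta + e (illustrative values, not the paper's design).
rng = np.random.default_rng(0)
n, p = 500, 5
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])  # x_1 = intercept
beta = np.array([1.0, 2.0, 0.0, -1.5, 0.0])   # assumed true coefficients
y = X @ beta + rng.standard_normal(n)          # e ~ N(0, 1)

# Ridge regression estimator: beta_hat = (X'X + lam*I)^{-1} X'y.
lam = 0.5                                      # illustrative penalty
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Naive residual-quantile prediction interval with 95% nominal coverage:
# [y_hat + q_lo, y_hat + q_hi], where q_lo, q_hi are sample residual quantiles.
resid = y - X @ beta_hat
q_lo, q_hi = np.quantile(resid, [0.025, 0.975])

# Prediction interval for a new case x_new.
x_new = np.array([1.0, 0.3, -0.2, 0.5, 1.0])
y_hat = x_new @ beta_hat
pi = (y_hat + q_lo, y_hat + q_hi)
print(f"95% PI for x_new: [{pi[0]:.2f}, {pi[1]:.2f}]")
```

With standard normal errors, q_lo and q_hi should land near ±1.96; a shrinkage estimator with heavier regularization (e.g. lasso) would be used the same way, only the fitting step changes.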
ISSN: 0932-5026
EISSN: 1613-9798
DOI: 10.1007/s00362-020-01193-1