Shrinkage estimation of large covariance matrices: Keep it simple, statistician?


Bibliographic Details
Published in: Journal of Multivariate Analysis, Vol. 186, p. 104796
Main Authors: Ledoit, Olivier; Wolf, Michael
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.11.2021

Summary: Under rotation-equivariant decision theory, sample covariance matrix eigenvalues can be optimally shrunk by recombining sample eigenvectors with a (potentially nonlinear) function of the unobservable population covariance matrix. The optimal shape of this function reflects the loss/risk that is to be minimized. We solve the problem of optimal covariance matrix estimation under a variety of loss functions motivated by statistical precedent, probability theory, and differential geometry. A key ingredient of our nonlinear shrinkage methodology is a new estimator of the angle between sample and population eigenvectors, without making strong assumptions on the population eigenvalues. We also introduce a broad family of covariance matrix estimators that can handle all regular functional transformations of the population covariance matrix under large-dimensional asymptotics. In addition, we compare via Monte Carlo simulations our methodology to two simpler ones from the literature, linear shrinkage and shrinkage based on the spiked covariance model.
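
The sketch below is only an illustration of the rotation-equivariant structure described in the summary: keep the sample eigenvectors and replace the sample eigenvalues with shrunk ones. It implements the simpler linear-shrinkage baseline mentioned in the comparison (eigenvalues pulled toward their grand mean), with a hypothetical, user-chosen intensity rho; it is not the paper's nonlinear, loss-specific estimator, whose shrinkage function is derived and estimated from data.

# Illustrative sketch (not the paper's estimator): a rotation-equivariant
# shrinkage estimator keeps the sample eigenvectors and replaces the sample
# eigenvalues. Here they are shrunk linearly toward their grand mean with a
# fixed, arbitrary intensity rho, chosen for illustration only.
import numpy as np

def linear_shrinkage_covariance(X, rho=0.5):
    """Rotation-equivariant covariance estimate from an n x p data matrix X.

    rho is a hypothetical shrinkage intensity in [0, 1]; it is not the
    data-driven optimal intensity from the shrinkage literature.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                    # center the data
    S = Xc.T @ Xc / n                          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)       # sample spectrum and eigenvectors
    mu = eigvals.mean()                        # grand mean of the eigenvalues
    shrunk = (1.0 - rho) * eigvals + rho * mu  # pull eigenvalues toward mu
    # Recombine: same sample eigenvectors, shrunk eigenvalues.
    return eigvecs @ np.diag(shrunk) @ eigvecs.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 200))         # p >> n: sample covariance is singular
    Sigma_hat = linear_shrinkage_covariance(X, rho=0.5)
    print(Sigma_hat.shape, np.linalg.cond(Sigma_hat))

Because the eigenvectors are unchanged, this is algebraically the same as the familiar linear combination (1 - rho) * S + rho * mu * I, i.e., the linear-shrinkage baseline; the nonlinear methodology in the article replaces the affine map of the eigenvalues with a loss-dependent nonlinear one.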
ISSN: 0047-259X, 1095-7243
DOI: 10.1016/j.jmva.2021.104796