The Dantzig selector: Statistical estimation when $p$ is much larger than $n$
Main Authors | Emmanuel Candès, Terence Tao
---|---
Format | Journal Article
Language | English
Published | 04.06.2005
DOI | 10.48550/arxiv.math/0506081 |
Summary: Annals of Statistics 2007, Vol. 35, No. 6, 2313-2351. In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y=X\beta+z$, where $\beta\in\mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n\ll p$, and the $z_i$'s are i.i.d. $N(0,\sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator--we call it the Dantzig selector--which is a solution to the $\ell_1$-regularization problem \[\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad\|X^*r\|_{\ell_\infty}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,\] where $r$ is the residual vector $y-X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, \[\|\hat{\beta}-\beta\|_{\ell_2}^2\le C^2\cdot2\log p\cdot\Biggl(\sigma^2+\sum_i\min(\beta_i^2,\sigma^2)\Biggr).\] Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
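The recast mentioned at the end of the summary is standard and worth spelling out. A sketch, in the notation of the summary plus an auxiliary vector $u$ introduced here for illustration: write $\lambda=(1+t^{-1})\sqrt{2\log p}\cdot\sigma$; then the Dantzig selector program is equivalent to \[\min_{\tilde{\beta},\,u\in\mathbf{R}^p}\sum_{i=1}^p u_i\quad\text{subject to}\quad -u\le\tilde{\beta}\le u,\qquad -\lambda\mathbf{1}\le X^*(y-X\tilde{\beta})\le\lambda\mathbf{1}.\] At the optimum $u_i=|\tilde{\beta}_i|$, so the objective equals $\|\tilde{\beta}\|_{\ell_1}$, and every constraint is linear in $(\tilde{\beta},u)$: a linear program in $2p$ variables with $4p$ inequality constraints.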
Bibliography: IMS-AOS-AOS0204
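For concreteness, here is a minimal sketch of that LP solved with SciPy's `linprog`. This is not the authors' implementation; the function name `dantzig_selector`, the choice $t=3$, and the toy data are assumptions made for illustration. The threshold follows the formula in the summary, and the columns of $X$ are normalized to unit norm as the uniform uncertainty principle hypothesis assumes.

```python
# Sketch, not the authors' code: Dantzig selector as an LP via SciPy (HiGHS).
# Decision vector is [b; u] in R^{2p}; minimize 1^T u with b free, u >= 0.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, sigma, t=3.0):
    """Solve min ||b||_1 s.t. ||X^T (y - X b)||_inf <= lam as a linear program."""
    n, p = X.shape
    lam = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma  # as in the summary

    # Objective: 0^T b + 1^T u.
    c = np.concatenate([np.zeros(p), np.ones(p)])

    G = X.T @ X  # p x p Gram matrix
    h = X.T @ y

    # All constraints written as A_ub @ [b; u] <= b_ub:
    #  (1)  G b      <=  h + lam   (residual-correlation bound, upper side)
    #  (2) -G b      <= -h + lam   (residual-correlation bound, lower side)
    #  (3)  b - u    <= 0          ( b <= u)
    #  (4) -b - u    <= 0          (-b <= u)
    I = np.eye(p)
    Z = np.zeros((p, p))
    A_ub = np.vstack([
        np.hstack([ G, Z]),
        np.hstack([-G, Z]),
        np.hstack([ I, -I]),
        np.hstack([-I, -I]),
    ])
    b_ub = np.concatenate([h + lam, -h + lam, np.zeros(p), np.zeros(p)])

    bounds = [(None, None)] * p + [(0, None)] * p  # b free, u nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy usage: sparse beta with n << p.
rng = np.random.default_rng(0)
n, p, sigma = 72, 256, 1.0
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)  # unit-normed columns, as in the paper
beta = np.zeros(p)
beta[:5] = 5.0
y = X @ beta + sigma * rng.standard_normal(n)
beta_hat = dantzig_selector(X, y, sigma)
print("squared l2 error:", np.sum((beta_hat - beta) ** 2))
```

Forming the Gram matrix $G=X^TX$ keeps the LP at $2p$ variables, which the HiGHS backend handles easily at this toy size; for large $p$ a specialized homotopy or dual solver would scale better.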