Minimax Rates of Estimation for High-Dimensional Linear Regression Over ℓq-Balls


Bibliographic Details
Published in: IEEE Transactions on Information Theory, Vol. 57, no. 10, pp. 6976-6994
Main Authors: Raskutti, G.; Wainwright, M. J.; Yu, Bin
Format: Journal Article
Language: English
Published: IEEE, 01.10.2011
Summary: Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ ℝ^n is an observation vector, X ∈ ℝ^{n×d} is a design matrix with d > n, β* ∈ ℝ^d is an unknown regression vector, and w ~ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in either ℓ2-loss or ℓ2-prediction loss, assuming that β* belongs to an ℓq-ball B_q(R_q) for some q ∈ [0,1]. It is shown that under suitable regularity conditions on the design matrix X, the minimax optimal rate in both ℓ2-loss and ℓ2-prediction loss scales as Θ(R_q (log d / n)^{1−q/2}). The analysis in this paper reveals that conditions on the design matrix X enter into the rates for ℓ2-error and ℓ2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls B_q(R_q), whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over ℓq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient ℓ1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix X than optimal algorithms involving least squares over the ℓ0-ball.
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2011.2165799