Precise Error Analysis of Regularized M -Estimators in High Dimensions

Bibliographic Details
Published in: IEEE Transactions on Information Theory, Vol. 64, no. 8, pp. 5592–5628
Main Authors: Thrampoulidis, Christos; Abbasi, Ehsan; Hassibi, Babak
Format: Journal Article
Language: English
Published: IEEE, 01.08.2018
Summary: A popular approach for estimating an unknown signal <inline-formula> <tex-math notation="LaTeX"> \mathbf {x}_{0}\in \mathbb {R} ^{n} </tex-math></inline-formula> from noisy, linear measurements <inline-formula> <tex-math notation="LaTeX"> \mathbf {y}= \mathbf {A} \mathbf {x} _{0}+ \mathbf {z}\in \mathbb {R}^{m} </tex-math></inline-formula> is to solve a so-called regularized <inline-formula> <tex-math notation="LaTeX">M </tex-math></inline-formula>-estimator: <inline-formula> <tex-math notation="LaTeX">\hat{\mathbf {x}} :=\arg \min _ \mathbf {x} \mathcal {L} (\mathbf {y}- \mathbf {A} \mathbf {x})+\lambda f(\mathbf {x}) </tex-math></inline-formula>. Here, <inline-formula> <tex-math notation="LaTeX"> \mathcal {L} </tex-math></inline-formula> is a convex loss function, <inline-formula> <tex-math notation="LaTeX">f </tex-math></inline-formula> is a convex (typically non-smooth) regularizer, and <inline-formula> <tex-math notation="LaTeX">\lambda > 0 </tex-math></inline-formula> is a regularization parameter. We analyze the squared-error performance <inline-formula> <tex-math notation="LaTeX">\|\hat{\mathbf {x}} - \mathbf {x}_{0}\|_{2}^{2} </tex-math></inline-formula> of such estimators in the high-dimensional proportional regime where <inline-formula> <tex-math notation="LaTeX">m,n\rightarrow \infty </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">m/n\rightarrow \delta </tex-math></inline-formula>. The design matrix <inline-formula> <tex-math notation="LaTeX"> \mathbf {A} </tex-math></inline-formula> is assumed to have i.i.d. Gaussian entries; only minimal and rather mild regularity conditions are imposed on the loss function, the regularizer, and the noise and signal distributions. We show that the squared error converges in probability to a nontrivial limit, given as the solution to a minimax convex-concave optimization problem over four scalar variables.
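A familiar instance of the estimator above is the LASSO, obtained by taking the squared loss for <inline-formula> <tex-math notation="LaTeX"> \mathcal {L} </tex-math></inline-formula> and the ℓ1 norm for <inline-formula> <tex-math notation="LaTeX">f </tex-math></inline-formula>. A minimal sketch in the paper's setting (i.i.d. Gaussian design, proportional regime), solved with proximal gradient descent (ISTA) — a standard generic solver, not the paper's analysis technique; the dimensions, sparsity level, noise scale, and λ below are illustrative choices, not values from the paper:

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Solve min_x 0.5*||y - A x||_2^2 + lam*||x||_1 via ISTA
    (proximal gradient descent with soft-thresholding)."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_op^2, a valid Lipschitz constant
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the smooth part
        x = x - step * grad
        # proximal operator of step*lam*||.||_1: soft-thresholding
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

rng = np.random.default_rng(0)
n, m = 400, 200                              # proportional regime: delta = m/n = 0.5
x0 = np.zeros(n)                             # sparse ground-truth signal (20 nonzeros)
x0[rng.choice(n, 20, replace=False)] = rng.standard_normal(20)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian design
y = A @ x0 + 0.05 * rng.standard_normal(m)    # noisy linear measurements
x_hat = ista_lasso(A, y, lam=0.02)
err = np.sum((x_hat - x0) ** 2)               # squared error ||x_hat - x0||_2^2
```

The paper's result predicts the deterministic limit of `err` (after normalization) as m, n grow with m/n → δ, for any loss/regularizer pair satisfying its regularity conditions.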
We identify a new summary parameter, termed the expected Moreau envelope, that plays a central role in the error characterization. The precise nature of the results permits an accurate performance comparison between different instances of regularized <inline-formula> <tex-math notation="LaTeX">M </tex-math></inline-formula>-estimators and allows one to optimally tune the involved parameters (such as the regularization parameter and the number of measurements). The key ingredient of our proof is the convex Gaussian min-max theorem, a tight and strengthened version of a classical Gaussian comparison inequality proved by Gordon in 1988.
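For context, the Moreau envelope underlying the summary parameter is the standard construction from convex analysis; its textbook definition (the symbol names below are generic, not taken from the paper) is

```latex
e_{f}(\mathbf{x};\tau) \;:=\; \min_{\mathbf{v}\in\mathbb{R}^{n}}
\left\{ f(\mathbf{v}) + \frac{1}{2\tau}\,\|\mathbf{x}-\mathbf{v}\|_{2}^{2} \right\},
\qquad \tau > 0,
```

a smoothed lower approximation of <inline-formula> <tex-math notation="LaTeX">f </tex-math></inline-formula>; the "expected" version takes the expectation of this quantity over the randomness of its argument.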
ISSN: 0018-9448
1557-9654
DOI: 10.1109/TIT.2018.2840720