Iteratively reweighted least squares minimization for sparse recovery

Bibliographic Details
Published in: Communications on Pure and Applied Mathematics, Vol. 63, No. 1, pp. 1–38
Main Authors: Daubechies, Ingrid; DeVore, Ronald; Fornasier, Massimo; Güntürk, C. Sinan
Format: Journal Article
Language: English
Published: Hoboken: Wiley Subscription Services, Inc., A Wiley Company, 01.01.2010

More Information
Summary: Under certain conditions (known as the restricted isometry property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ ℝ^N that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from y := Φx even though Φ⁻¹(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element in Φ⁻¹(y) of minimal 𝓁₁-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ⁻¹(y) with smallest 𝓁₂(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) := [|x_i^(n)|² + ε_n²]^(−1/2), i = 1, …, N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^(n+1), and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^(n) converges for all y, regardless of whether Φ⁻¹(y) contains a sparse vector. If there is a sparse vector in Φ⁻¹(y), then the limit is this sparse vector, and when x^(n) is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w_i^(n) = [|x_i^(n)|² + ε_n²]^(−1+τ/2), i = 1, …, N, where 0 < τ < 1, can recover sparse solutions as well; more importantly, we show that its local convergence is superlinear and approaches a quadratic rate as τ approaches 0. © 2009 Wiley Periodicals, Inc.
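To make the iteration described above concrete, the following is a minimal NumPy sketch of the scheme, not the authors' reference implementation. The weight formulas are taken directly from the summary; the function name irls_sparse_recovery, the stopping tolerance, and the ε-update rule ε ← min(ε, r_{K+1}(x)/N), where r_{K+1}(x) is the (K+1)-st largest entry of x in magnitude and K is an assumed sparsity level, are illustrative assumptions (the abstract only says that ε_n is adaptively defined and decreasing).

import numpy as np

def irls_sparse_recovery(Phi, y, K, tau=1.0, max_iter=100, tol=1e-9):
    """Sketch of the IRLS scheme from the summary (hypothetical helper).

    tau = 1 gives the basic weight [|x_i|^2 + eps^2]^(-1/2);
    0 < tau < 1 gives the "heavier" weight [|x_i|^2 + eps^2]^(-1 + tau/2).
    """
    m, N = Phi.shape
    w = np.ones(N)        # unit weights: first iterate is the minimal l2-norm solution
    eps, x = 1.0, np.zeros(N)
    for _ in range(max_iter):
        # Weighted least-squares step: minimize sum_i w_i * x_i^2 subject to Phi x = y.
        # Closed form: x = D Phi^T (Phi D Phi^T)^{-1} y with D = diag(1/w_i).
        D = 1.0 / w
        G = Phi @ (D[:, None] * Phi.T)            # Phi D Phi^T, an m x m system
        x_new = D * (Phi.T @ np.linalg.solve(G, y))
        # Adaptive eps update: shrink using the (K+1)-st largest magnitude
        # (an assumed rule; see the lead-in note above).
        r = np.sort(np.abs(x_new))[::-1]
        eps = min(eps, r[K] / N)
        if eps == 0:      # iterate is already K-sparse; weights would degenerate, so stop
            return x_new
        # Weight update from the summary: w_i = [|x_i|^2 + eps^2]^(-1 + tau/2).
        w = (x_new**2 + eps**2) ** (tau / 2.0 - 1.0)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

Solving the m × m system with matrix Φ D Φᵀ keeps each iteration at the cost of one small linear solve, matching the description of the main step as a weighted least-norm problem over the affine set Φ⁻¹(y). A quick sanity check with a random Gaussian Φ and a K-sparse x is an easy way to exercise the sketch.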
Bibliography:European Union via the Marie Curie Individual Fellowship - No. MOIF-CT-2006-039438
New York University Goddard Fellowship
Army Research Office - Contract No. DAAD 19-02-1-0028
Program in Applied and Computational Mathematics at Princeton University
istex:CD6CECC9EF52884DD8F2F98B07B9D18CCB8FD56B
ark:/67375/WNG-JDLDJR9P-3
National Science Foundation - No. DMS-0504924; No. DMS-0530865; No. DMS-0221642; No. DMS-0200187; No. CCF-0515187
Courant Institute
ArticleID:CPA20303
Alfred P. Sloan Research Fellowship
Office of Naval Research - No. ONR-N00014-03-1-0051; No. ONR/DEPSCoR N00014-03-1-0675; No. ONR/DEPSCoR N00014-00-1-0470
ISSN: 0010-3640
EISSN: 1097-0312
DOI: 10.1002/cpa.20303