Convergence rates for an inexact ADMM applied to separable convex optimization

Bibliographic Details
Published in: Computational Optimization and Applications, Vol. 77, No. 3, pp. 729–754
Main Authors: Hager, William W.; Zhang, Hongchao
Format: Journal Article
Language: English
Published: New York: Springer US, 01.12.2020 (Springer Nature B.V.)
Summary: Convergence rates are established for an inexact accelerated alternating direction method of multipliers (I-ADMM) for general separable convex optimization with a linear constraint. Both ergodic and non-ergodic iterates are analyzed. Relative to the iteration number k, the convergence rate is O(1/k) in a convex setting and O(1/k^2) in a strongly convex setting. When an error bound condition holds, the algorithm is 2-step linearly convergent. The I-ADMM is designed so that the accuracy of the inexact iteration preserves the global convergence rates of the exact iteration, leading to better numerical performance in the test problems.
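For orientation only, the following is a minimal sketch of a textbook (exact, non-accelerated, scaled-form) ADMM on a toy separable problem — minimize 0.5(x−a)² + 0.5(z−b)² subject to x = z — illustrating the alternating x-, z-, and dual updates that the abstract refers to. This is not the authors' I-ADMM; the penalty parameter `rho`, the iteration count, and the toy data are all illustrative assumptions.

```python
# Textbook scaled-form ADMM on a toy separable problem (NOT the paper's
# I-ADMM): minimize 0.5*(x-a)**2 + 0.5*(z-b)**2  subject to  x = z.
# The closed-form solution is x* = z* = (a + b) / 2.

def admm_toy(a, b, rho=1.0, iters=60):
    """Scalar ADMM with scaled dual variable u; rho is an assumed penalty."""
    z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x - a)**2 + (rho/2)*(x - z + u)**2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: argmin_z 0.5*(z - b)**2 + (rho/2)*(x - z + u)**2
        z = (b + rho * (x + u)) / (1.0 + rho)
        # scaled dual update on the constraint residual x - z
        u += x - z
    return x, z

x, z = admm_toy(a=1.0, b=3.0)
print(x, z)  # both approach (1 + 3) / 2 = 2.0
```

On this toy instance the iterates converge linearly; the paper's contribution is showing which inexactness in these subproblem solves still preserves the exact method's O(1/k), O(1/k^2), and 2-step linear rates.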
ISSN: 0926-6003
EISSN: 1573-2894
DOI: 10.1007/s10589-020-00221-y