Convergence rates for an inexact ADMM applied to separable convex optimization
Published in | Computational optimization and applications Vol. 77; no. 3; pp. 729 - 754 |
---|---|
Main Authors | , |
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.12.2020; Springer Nature B.V |
Subjects | |
Online Access | Get full text |
Summary: | Convergence rates are established for an inexact accelerated alternating direction method of multipliers (I-ADMM) for general separable convex optimization with a linear constraint. Both ergodic and non-ergodic iterates are analyzed. Relative to the iteration number k, the convergence rate is O(1/k) in a convex setting and O(1/k^2) in a strongly convex setting. When an error bound condition holds, the algorithm is 2-step linearly convergent. The I-ADMM is designed so that the accuracy of the inexact iteration preserves the global convergence rates of the exact iteration, leading to better numerical performance in the test problems. |
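To make the summary concrete, the following is a minimal sketch of a *standard, exact* ADMM iteration on a toy separable problem — not the paper's accelerated I-ADMM, which additionally permits inexact subproblem solves. The problem instance (quadratic terms, A = I, B = -I, c = 0) and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy separable problem:
#   minimize  0.5*||x - a||^2 + 0.5*||z - b||^2
#   subject to x - z = 0   (A = I, B = -I, c = 0)
# Exact ADMM updates; the paper's I-ADMM replaces the exact
# subproblem solves with controlled inexact solves plus acceleration.

def admm(a, b, rho=1.0, iters=100):
    n = a.size
    x = np.zeros(n)
    z = np.zeros(n)
    lam = np.zeros(n)  # multiplier for the constraint x - z = 0
    for _ in range(iters):
        # x-update: argmin_x 0.5||x-a||^2 + lam^T x + (rho/2)||x - z||^2
        x = (a - lam + rho * z) / (1.0 + rho)
        # z-update: argmin_z 0.5||z-b||^2 - lam^T z + (rho/2)||x - z||^2
        z = (b + lam + rho * x) / (1.0 + rho)
        # dual ascent on the constraint residual x - z
        lam = lam + rho * (x - z)
    return x, z

a = np.array([1.0, 2.0])
b = np.array([3.0, 0.0])
x, z = admm(a, b)
# At the consensus solution, x = z = (a + b) / 2
print(np.round(x, 4), np.round(z, 4))
```

For this strongly convex instance the iterates contract linearly toward the consensus point (a + b)/2, which is consistent with the strongly convex rates discussed in the abstract.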
---|---|
ISSN: | 0926-6003 1573-2894 |
DOI: | 10.1007/s10589-020-00221-y |