Robustly Stable Accelerated Momentum Methods With A Near-Optimal L2 Gain and $H_\infty$ Performance
Format | Journal Article
---|---
Language | English
Published | 20.09.2023
DOI | 10.48550/arxiv.2309.11481
Summary: We consider the problem of minimizing a strongly convex smooth function where the gradients are subject to additive worst-case deterministic errors that are square-summable. We study the trade-offs between the convergence rate and robustness to gradient errors when designing the parameters of a first-order algorithm. We focus on a general class of momentum methods (GMM) with constant stepsize and momentum parameters, which can recover gradient descent (GD), Nesterov's accelerated gradient (NAG), the heavy-ball (HB) and the triple momentum methods as special cases. We measure the robustness of an algorithm in terms of the cumulative suboptimality over the iterations divided by the $\ell_2$ norm of the gradient errors, which can be interpreted as the minimal (induced) $\ell_2$ gain of a transformed dynamical system that represents the GMM iterations, where the input is the gradient error sequence and the output is a weighted distance to the optimum. For quadratic objectives, we compute the induced $\ell_2$ gain explicitly by leveraging its connection to the $H_\infty$ norm of the dynamical system corresponding to GMM, and we construct worst-case gradient error sequences by a closed-form formula. We also study the stability of GMM with respect to multiplicative noise in various settings by characterizing the structured real stability radius of the GMM system through its connection to the $H_\infty$ norm. This allows us to compare the GD, HB, and NAG methods in terms of robustness, and argue that HB is not as robust as NAG despite being the fastest...
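
To make the setup concrete, below is a minimal Python sketch of the GMM template the abstract refers to, assuming the standard three-parameter recurrence $x_{k+1} = x_k + \beta (x_k - x_{k-1}) - \alpha \nabla f(y_k)$ with $y_k = x_k + \gamma (x_k - x_{k-1})$ and an additive gradient error $e_k$. The function names, the parametrization, and the parameter values are illustrative rather than taken from the paper; setting $\beta = \gamma = 0$ recovers GD, $\gamma = 0$ recovers HB, and $\gamma = \beta$ recovers NAG.

```python
import numpy as np

def gmm(grad, x0, alpha, beta, gamma, errors, n_iters):
    """Generalized momentum iteration with additive gradient errors.

    x_{k+1} = x_k + beta*(x_k - x_{k-1}) - alpha*(grad(y_k) + e_k),
    y_k     = x_k + gamma*(x_k - x_{k-1}).

    beta = gamma = 0 gives GD, gamma = 0 gives HB, gamma = beta gives NAG.
    """
    x_prev, x = x0.copy(), x0.copy()
    iterates = [x0.copy()]
    for k in range(n_iters):
        y = x + gamma * (x - x_prev)
        g = grad(y) + errors[k]                 # additive (worst-case) gradient error e_k
        x_prev, x = x, x + beta * (x - x_prev) - alpha * g
        iterates.append(x.copy())
    return np.array(iterates)

# Illustrative usage: f(x) = 1/2 x^T diag(1, 10) x, error-free sequence as a placeholder.
Q = np.diag([1.0, 10.0])
errors = [np.zeros(2)] * 50                     # replace with a square-summable error sequence
xs = gmm(lambda y: Q @ y, np.array([1.0, 1.0]),
         alpha=0.1, beta=0.5, gamma=0.5, errors=errors, n_iters=50)
```

For a scalar quadratic $f(x) = \tfrac{\lambda}{2} x^2$, the map from the error sequence to the (square-rooted) suboptimality is a linear time-invariant system, so its induced $\ell_2$ gain equals the $H_\infty$ norm, i.e. the peak magnitude of the transfer function on the unit circle. The sketch below evaluates that norm by a frequency sweep; the state-space realization and the output weighting $\sqrt{\lambda/2}\,x_k$ are one natural choice consistent with measuring cumulative suboptimality, not necessarily the exact transformation used in the paper.

```python
import numpy as np

def hinf_norm_quadratic(alpha, beta, gamma, lam, n_freqs=2000):
    """H_infinity norm of the error-to-suboptimality map of GMM on f(x) = lam/2 * x^2.

    State xi_k = (x_k, x_{k-1}), input e_k (gradient error), output
    z_k = sqrt(lam/2) * x_k, so that sum_k z_k^2 is the cumulative suboptimality.
    Assumes the closed-loop matrix A is Schur stable for the chosen parameters.
    """
    A = np.array([[1 + beta - alpha * lam * (1 + gamma), -beta + alpha * lam * gamma],
                  [1.0, 0.0]])
    B = np.array([[-alpha], [0.0]])
    C = np.array([[np.sqrt(lam / 2.0), 0.0]])
    gains = []
    for w in np.linspace(0.0, np.pi, n_freqs):
        z = np.exp(1j * w)
        T = C @ np.linalg.solve(z * np.eye(2) - A, B)   # transfer function C(zI - A)^{-1} B
        gains.append(abs(T[0, 0]))
    return max(gains)

# Illustrative comparison on a quadratic with curvature lam = 1
# (parameter choices are for demonstration, not the tuned values from the paper).
print(hinf_norm_quadratic(alpha=1.0, beta=0.5, gamma=0.0, lam=1.0))  # HB-like
print(hinf_norm_quadratic(alpha=1.0, beta=0.5, gamma=0.5, lam=1.0))  # NAG-like
```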