On Linear Convergence of Non-Euclidean Gradient Methods without Strong Convexity and Lipschitz Gradient Continuity

Bibliographic Details
Published in: Journal of Optimization Theory and Applications, Vol. 182, No. 3, pp. 1068-1087
Main Authors: Bauschke, Heinz H.; Bolte, Jérôme; Chen, Jiawei; Teboulle, Marc; Wang, Xianfu
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.09.2019

Summary: The gradient method is well known to converge globally and linearly when the objective function is strongly convex and admits a Lipschitz continuous gradient. In many applications, both assumptions are too stringent, precluding the use of gradient methods. In the early 1960s, after the breakthrough of Łojasiewicz on gradient inequalities, it was observed that uniform convexity assumptions could be relaxed and replaced by these inequalities. On the other hand, very recently it has been shown that Lipschitz gradient continuity can be lifted and replaced by a class of functions satisfying a non-Euclidean descent property expressed in terms of a Bregman distance. In this note, we combine these two ideas to introduce a class of non-Euclidean gradient-like inequalities that allow us to prove linear convergence of a Bregman gradient method for nonconvex minimization, even when neither strong convexity nor Lipschitz gradient continuity holds.
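
For readers unfamiliar with the scheme named in the abstract, below is a minimal Python/NumPy sketch of one Bregman gradient step, x+ = argmin_u { <grad f(x), u> + (1/t) D_h(u, x) }, where D_h(u, x) = h(u) - h(x) - <grad h(x), u - x> is the Bregman distance generated by a Legendre kernel h. The entropy kernel, the quadratic test objective, and the step size are illustrative assumptions for this sketch, not the paper's actual setup or convergence regime.

    import numpy as np

    def bregman_gradient_step(x, grad_f, t, grad_h, grad_h_inv):
        # For a Legendre kernel h, the update
        #   x+ = argmin_u <grad_f(x), u> + (1/t) * D_h(u, x)
        # has the closed form grad_h(x+) = grad_h(x) - t * grad_f(x).
        return grad_h_inv(grad_h(x) - t * grad_f(x))

    # Illustrative kernel: Boltzmann-Shannon entropy h(x) = sum(x*log(x) - x)
    # on the positive orthant, with grad_h = log and inverse exp; the step
    # then becomes the multiplicative update x+ = x * exp(-t * grad_f(x)).
    grad_h, grad_h_inv = np.log, np.exp

    # Hypothetical smooth objective f(x) = 0.5 * ||A @ x - b||^2 (not from the paper).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    b = rng.standard_normal(5)
    grad_f = lambda x: A.T @ (A @ x - b)

    x = np.ones(3)  # start in the interior of dom h (the positive orthant)
    for _ in range(500):
        x = bregman_gradient_step(x, grad_f, t=0.05,
                                  grad_h=grad_h, grad_h_inv=grad_h_inv)

With this kernel the iterates stay in the positive orthant automatically, since the update multiplies x by a strictly positive factor; adapting the geometry to the constraint set in this way is the usual appeal of non-Euclidean (Bregman) gradient methods.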
ISSN: 0022-3239 (print); 1573-2878 (electronic)
DOI: 10.1007/s10957-019-01516-9