Lower bounds for non-convex stochastic optimization
Published in | Mathematical Programming, Vol. 199, no. 1-2, pp. 165-214
Main Authors | Arjevani, Yossi; Carmon, Yair; Duchi, John C.; Foster, Dylan J.; Srebro, Nathan; Woodworth, Blake
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.05.2023
Summary | We lower bound the complexity of finding ϵ-stationary points (with gradient norm at most ϵ) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least ϵ⁻⁴ queries to find an ϵ-stationary point. The lower bound is tight, and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of ϵ⁻³ queries, establishing the optimality of recently proposed variance reduction techniques.
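For readers skimming the record, a schematic restatement of the oracle model behind these rates may help. The parameter names below (Δ for the initial optimality gap, L for smoothness, σ for the noise level, L̄ for mean-squared smoothness) follow the standard setup in this literature and are our notation, not quoted from the record itself:

```latex
% Oracle model assumed in the summary: f is L-smooth with f(x_0) - inf f <= Delta,
% and each query returns an unbiased gradient estimate with bounded variance:
\[
  \mathbb{E}_z\left[ g(x,z) \right] = \nabla f(x),
  \qquad
  \mathbb{E}_z\left\| g(x,z) - \nabla f(x) \right\|^2 \le \sigma^2 .
\]
% Goal: output x with \|\nabla f(x)\| \le \epsilon. The two lower bounds stated
% above, suppressing the dependence on (Delta, L, sigma):
\[
  \text{bounded variance: } \Omega\left(\epsilon^{-4}\right) \text{ queries},
  \qquad
  \text{mean-squared smooth: } \Omega\left(\epsilon^{-3}\right) \text{ queries},
\]
% where the mean-squared smoothness property restricting the second model is
\[
  \mathbb{E}_z\left\| g(x,z) - g(y,z) \right\|^2 \le \bar{L}^{\,2} \left\| x - y \right\|^2 .
\]
```

Since the summary asserts that plain stochastic gradient descent already matches the ϵ⁻⁴ bound in the bounded-variance model, a minimal SGD sketch against such an oracle is shown below; the test function, noise model, step size, and stopping check are illustrative choices of ours, not the paper's hard-instance construction.

```python
import numpy as np

# Minimal SGD sketch for the bounded-variance oracle model in the summary:
# the algorithm updates using only noisy gradients g(x) with E[g] = grad f(x)
# and E||g - grad f(x)||^2 <= sigma^2.

rng = np.random.default_rng(0)
L, sigma, eps = 1.0, 1.0, 0.1           # smoothness, noise level, target accuracy

def grad_f(x):
    return x                            # gradient of f(x) = ||x||^2 / 2, so L = 1

def oracle(x):
    # Unbiased stochastic gradient; the added noise has E||noise||^2 = sigma^2.
    return grad_f(x) + sigma * rng.standard_normal(x.shape) / np.sqrt(x.size)

x = np.full(10, 5.0)
eta = min(1.0 / L, eps**2 / (L * sigma**2))   # standard step size for this regime
for t in range(1_000_000):
    # Checking the true gradient is for demonstration only; the algorithm
    # itself only ever sees the oracle's noisy estimates.
    if np.linalg.norm(grad_f(x)) <= eps:      # eps-stationary: ||grad f(x)|| <= eps
        print(f"eps-stationary after {t} oracle queries")
        break
    x = x - eta * oracle(x)
```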
ISSN | 0025-5610; 1436-4646
DOI | 10.1007/s10107-022-01822-7