A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
| Format | Journal Article |
| Language | English |
| Published | 31.01.2022 |
| DOI | 10.48550/arxiv.2201.13409 |
Summary: Bilevel optimization, the problem of minimizing a value function that involves the arg-minimum of another function, appears in many areas of machine learning. In a large-scale empirical risk minimization setting where the number of samples is huge, it is crucial to develop stochastic methods, which use only a few samples at a time to make progress. However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates. To overcome this problem we introduce a novel framework in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time. The update directions are written as sums over samples, making it straightforward to derive unbiased estimates. The simplicity of our approach allows us to develop global variance reduction algorithms, in which the dynamics of all variables are subject to variance reduction. We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(\frac{1}{T})$ convergence rate, and that it achieves linear convergence under a Polyak-Łojasiewicz assumption. This is the first stochastic algorithm for bilevel optimization with either of these guarantees. Numerical experiments validate the usefulness of our method.