A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 11.08.2016 |
DOI | 10.48550/arxiv.1608.03487 |
Summary: | This paper focuses on convex constrained optimization problems in which the solution is subject to a convex inequality constraint. In particular, we aim at challenging problems for which both projection onto the constrained domain and linear optimization under the inequality constraint are time-consuming, which renders both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. We develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition on the constraint function. We first present a general theory of optimization with only one projection. Its application to smooth optimization with only one projection yields an $O(1/\epsilon)$ iteration complexity, which improves over the $O(1/\epsilon^2)$ iteration complexity established previously for non-smooth optimization, and can be further reduced under strong convexity. We then introduce a local error bound condition and develop faster algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve an iteration complexity of $\widetilde O(1/\epsilon^{2(1-\theta)})$ for non-smooth optimization and $\widetilde O(1/\epsilon^{1-\theta})$ for smooth optimization, where $\theta \in (0,1]$, which appears in the local error bound condition, characterizes the local growth rate of the objective function around the optimal solutions. Novel applications to the constrained $\ell_1$ minimization problem and a positive semi-definite constrained distance metric learning problem demonstrate that the proposed algorithms achieve significant speed-ups over previous algorithms. |
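To make the "only one projection" idea above concrete, here is a minimal sketch in the spirit of penalty-based one-projection schemes: take subgradient steps on a penalized, unconstrained objective and project only the final averaged iterate. This is an illustration under our own assumptions, not the paper's exact algorithm; the penalty weight `gamma`, the step-size schedule, and the toy constraint are all illustrative choices.

```python
import numpy as np

def one_projection_subgradient(f_grad, g, g_grad, project, x0,
                               gamma=10.0, T=1000, eta0=0.1):
    """Sketch of a one-projection scheme (illustrative, not the paper's
    exact method): take subgradient steps on the penalized objective
    f(x) + gamma * max(g(x), 0), which needs no projection per
    iteration, then project the averaged iterate onto {x : g(x) <= 0}
    exactly once at the end. gamma must be large enough to dominate
    f's gradient on the infeasible region (illustrative value here)."""
    x = x0.astype(float)
    x_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        grad = f_grad(x)
        if g(x) > 0:                          # constraint violated:
            grad = grad + gamma * g_grad(x)   # add penalty subgradient
        x = x - (eta0 / np.sqrt(t)) * grad    # diminishing step size
        x_sum += x
    return project(x_sum / T)                 # the single projection

# Toy usage: minimize ||x - b||^2 subject to ||x||_2 <= 1;
# the true solution is [1, 0].
b = np.array([2.0, 0.0])
f_grad = lambda x: 2.0 * (x - b)
g = lambda x: np.linalg.norm(x) - 1.0
g_grad = lambda x: x / max(np.linalg.norm(x), 1e-12)
project = lambda x: x / max(np.linalg.norm(x), 1.0)
print(one_projection_subgradient(f_grad, g, g_grad, project, np.zeros(2)))
```

The per-iteration cost involves only (sub)gradient evaluations; the expensive projection is paid once, on the averaged iterate, which is what makes this style of algorithm attractive when projections dominate the running time.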
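The local error bound condition behind the $\widetilde O(1/\epsilon^{2(1-\theta)})$ and $\widetilde O(1/\epsilon^{1-\theta})$ rates typically takes the following standard form (the constant $c$ and the level-set radius $\varepsilon_0$ below are generic placeholders, not values taken from the paper):

```latex
\exists\, c > 0,\ \theta \in (0, 1]:\quad
\operatorname{dist}(x, \mathcal{X}_*) \le c \, \bigl(f(x) - f_*\bigr)^{\theta}
\quad \text{for all feasible } x \text{ with } f(x) - f_* \le \varepsilon_0,
```

where $\mathcal{X}_*$ is the set of optimal solutions and $f_*$ the optimal value. Equivalently, $f$ grows at least as fast as $\operatorname{dist}(x, \mathcal{X}_*)^{1/\theta}$ near $\mathcal{X}_*$; at $\theta = 1$ both exponents above vanish, leaving only the logarithmic factors hidden in $\widetilde O$.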
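Of the two applications mentioned, constrained $\ell_1$ minimization illustrates why projections are the bottleneck. A representative formulation (our illustrative choice; the paper's exact variant may differ) is

```latex
\min_{x \in \mathbb{R}^d} \|x\|_1
\quad \text{subject to} \quad \|Ax - b\|_2^2 \le \delta,
```

where projecting onto $\{x : \|Ax - b\|_2^2 \le \delta\}$ means solving a quadratically constrained subproblem, so an algorithm that pays for only one (or logarithmically many) such projections can be substantially cheaper than projected gradient descent.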