Randomized Smoothing for Stochastic Optimization
| Published in | SIAM Journal on Optimization, Vol. 22, no. 2, pp. 674–701 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Philadelphia: Society for Industrial and Applied Mathematics, 01.01.2012 |
| Subjects | |
Summary: We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for nonsmooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
ISSN: 1052-6234, 1095-7189
DOI: 10.1137/110831659
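
The abstract describes combining randomized smoothing with accelerated gradient methods for nonsmooth stochastic optimization. The snippet below is only a minimal sketch of that general idea, not the authors' algorithm or step-size schedules: it assumes Gaussian smoothing, a Monte Carlo average of subgradients at perturbed points, and plain Nesterov momentum, and the function names, step size, smoothing radius, sample count, and toy problem are all hypothetical choices made for illustration.

```python
import numpy as np

def smoothed_gradient(subgrad, x, mu=0.1, num_samples=10, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    objective f_mu(x) = E[f(x + mu * Z)], Z ~ N(0, I), obtained by
    averaging subgradients of f at randomly perturbed points."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        z = rng.standard_normal(x.shape)
        grad += subgrad(x + mu * z)
    return grad / num_samples

def accelerated_smoothed_descent(subgrad, x0, steps=300, lr=0.05,
                                 mu=0.1, num_samples=10):
    """Nesterov-style accelerated descent on the smoothed surrogate.
    The fixed constants here are illustrative, not the schedules
    analyzed in the paper."""
    x = x0.astype(float).copy()
    y = x.copy()
    for t in range(1, steps + 1):
        g = smoothed_gradient(subgrad, y, mu=mu, num_samples=num_samples)
        x_next = y - lr * g                              # gradient step on smoothed objective
        y = x_next + (t - 1.0) / (t + 2.0) * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Toy nonsmooth problem: f(x) = ||x||_1 + 0.5 * ||x - b||^2,
# whose minimizer is the soft-thresholding of b at level 1.
b = np.array([1.0, -2.0, 0.5])
subgrad = lambda x: np.sign(x) + (x - b)  # one valid subgradient of f
print(accelerated_smoothed_descent(subgrad, np.zeros(3)))  # roughly [0, -1, 0]
```

In the paper's setting, the smoothing radius and step sizes would be chosen to trade off the bias introduced by smoothing against the variance of the stochastic gradient estimates; the constants above are fixed only to keep the sketch short.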