Randomized Smoothing for Stochastic Optimization

Bibliographic Details
Published in: SIAM Journal on Optimization, Vol. 22, No. 2, pp. 674-701
Main Authors: Duchi, John C.; Bartlett, Peter L.; Wainwright, Martin J.
Format: Journal Article
Language: English
Published: Philadelphia: Society for Industrial and Applied Mathematics, 01.01.2012
Summary: We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for nonsmooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
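The core technique named in the summary, randomized smoothing, replaces a nonsmooth convex f with the Gaussian-smoothed surrogate f_mu(x) = E[f(x + mu * Z)], Z ~ N(0, I), whose gradient can be estimated by averaging stochastic subgradients at randomly perturbed points. The following is a minimal sketch in Python/NumPy, not the paper's algorithm (which couples such estimates with an accelerated method and a carefully scheduled mu); the names smoothed_gradient and smoothed_sgd, the step size, and the batch size are illustrative assumptions.

    import numpy as np

    def smoothed_gradient(subgrad, x, mu, batch, rng):
        # Monte Carlo estimate of grad f_mu(x), where
        # f_mu(x) = E[f(x + mu * Z)] with Z standard normal:
        # average subgradients of f at Gaussian-perturbed points.
        g = np.zeros_like(x)
        for _ in range(batch):
            z = rng.standard_normal(x.shape)
            g += subgrad(x + mu * z)
        return g / batch

    def smoothed_sgd(subgrad, x0, mu=0.1, step=0.05, iters=500, batch=8, seed=0):
        # Plain stochastic gradient descent on the smoothed surrogate;
        # the paper instead feeds such estimates to an accelerated method.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            x = x - step * smoothed_gradient(subgrad, x, mu, batch, rng)
        return x

    # Example: f(x) = ||x||_1 is nonsmooth; np.sign(x) is a subgradient.
    x_final = smoothed_sgd(np.sign, x0=np.ones(5))
    print(x_final)  # entries driven approximately to 0, the minimizer of ||x||_1

Roughly speaking, a larger mu makes the surrogate f_mu smoother (so gradient methods take larger stable steps) but widens the gap between f_mu and f; balancing this tradeoff against the variance of the gradient estimates is what drives the paper's analysis.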
ISSN: 1052-6234
EISSN: 1095-7189
DOI: 10.1137/110831659