Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 68, pp. 6255-6271
Main Authors: Xin, Ran; Khan, Usman A.; Kar, Soummya
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020
Summary: This paper describes a novel algorithmic framework to minimize a finite sum of functions available over a network of nodes. The proposed framework, which we call GT-VR, is stochastic and decentralized, and is thus particularly suitable for problems where large-scale, potentially private data cannot be collected or processed at a centralized server. The GT-VR framework leads to a family of algorithms with two key ingredients: (i) local variance reduction, which enables estimating the local batch gradients from arbitrarily drawn samples of local data; and (ii) global gradient tracking, which fuses the gradient information across the nodes. Naturally, combining different variance reduction and gradient tracking techniques leads to different algorithms of interest with valuable practical trade-offs and design considerations. Our focus in this paper is on two instantiations of the GT-VR framework, namely GT-SAGA and GT-SVRG, which, like their centralized counterparts (SAGA and SVRG), exhibit a compromise between space and time. We show that both GT-SAGA and GT-SVRG achieve accelerated linear convergence for smooth and strongly convex problems, and we further describe the regimes in which they achieve non-asymptotic, network-independent linear convergence rates that are faster than those of existing decentralized first-order schemes. Moreover, we show that in such regimes both algorithms achieve a linear speedup compared to their centralized counterparts that process all data at a single node. Extensive simulations illustrate the convergence behavior of the corresponding algorithms.
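To make the two ingredients concrete, the following is a minimal NumPy sketch of a GT-SAGA-style iteration: a local SAGA variance-reduced gradient estimator at each node, fused across the network by gradient tracking. Everything here is an illustrative assumption rather than the paper's exact recursion: the toy least-squares problem, the ring network with Metropolis-style mixing weights, the zero-initialized gradient table, and the fixed step size are all hypothetical choices made for a self-contained example; consult the paper for the precise algorithm and step-size conditions.

```python
# Illustrative GT-SAGA-style sketch (NOT the paper's exact recursion).
# Problem instance, network, and all names below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n nodes, each holding m local least-squares samples in R^d.
n, m, d = 4, 20, 5
A = rng.normal(size=(n, m, d))          # local feature vectors
b = rng.normal(size=(n, m))             # local targets

def sample_grad(i, s, x):
    """Gradient of the s-th local component f_{i,s}(x) = 0.5*(a^T x - b)^2."""
    a = A[i, s]
    return (a @ x - b[i, s]) * a

# Doubly stochastic mixing matrix for a ring (lazy Metropolis-style weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.01                            # step size (assumed small enough)
x = np.zeros((n, d))                    # local iterates x_i
table = np.zeros((n, m, d))             # per-node SAGA gradient tables
g = np.array([sample_grad(i, 0, x[i]) for i in range(n)])  # initial estimators
y = g.copy()                            # gradient trackers y_i

for k in range(2000):
    # (1) Consensus step plus descent along the tracked gradient direction.
    x = W @ x - alpha * y
    # (2) Local SAGA variance-reduced gradient estimator at each node.
    g_new = np.empty_like(g)
    for i in range(n):
        s = rng.integers(m)             # uniformly drawn local sample
        grad = sample_grad(i, s, x[i])
        g_new[i] = grad - table[i, s] + table[i].mean(axis=0)
        table[i, s] = grad              # refresh the stored gradient
    # (3) Global gradient tracking: fuse estimators across neighbors so that
    #     each y_i tracks the network-average gradient.
    y = W @ y + g_new - g
    g = g_new
```

In this sketch the per-node gradient table is what SAGA trades in space for time; a GT-SVRG-style variant would instead periodically recompute a full local batch gradient, storing less but computing more.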
ISSN: 1053-587X, 1941-0476
DOI: 10.1109/TSP.2020.3031071