An Accelerated Second-Order Method for Distributed Stochastic Optimization

Bibliographic Details
Published in: 2021 60th IEEE Conference on Decision and Control (CDC), pp. 2407-2413
Main Authors: Agafonov, Artem; Dvurechensky, Pavel; Scutari, Gesualdo; Gasnikov, Alexander; Kamzolov, Dmitry; Lukashevich, Aleksandr; Daneshmand, Amir
Format: Conference Proceeding
Language: English
Published: IEEE, 14.12.2021
Summary: We consider centralized distributed algorithms for general stochastic convex optimization problems, which we approximate by a finite-sum minimization problem with summands distributed among computational nodes. We exploit the statistical similarity between the summands and the whole sum to construct a distributed accelerated cubic-regularized Newton method that achieves the lower communication-complexity bound for this setting and improves upon the existing upper bound. We then use this algorithm to obtain convergence-rate bounds for the original stochastic optimization problem and compare our bounds with existing ones in several regimes, where the goal is to minimize the number of communication rounds and to improve parallelization as the number of workers grows.
ISSN: 2576-2370
DOI: 10.1109/CDC45484.2021.9683400
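The key primitive named in the summary is a cubic-regularized Newton step in the style of Nesterov and Polyak. The sketch below is a minimal single-machine illustration of that step only, not the paper's distributed or accelerated method; the test objective, the regularization constant M, and all function names are illustrative assumptions.

```python
# Minimal single-machine sketch of one cubic-regularized Newton step,
# the basic building block behind cubic-regularized Newton methods.
# The test objective, the constant M, and all names here are
# illustrative assumptions, not taken from the paper.
import numpy as np

def cubic_newton_step(g, H, M):
    """Minimize the cubic model  <g, s> + 0.5 s'Hs + (M/6)||s||^3.

    In the convex case (H positive semidefinite) the minimizer
    satisfies (H + lam*I) s = -g with lam = (M/2)*||s||, so we find
    lam by bisection and recover s in the eigenbasis of H.
    """
    eigvals, Q = np.linalg.eigh(H)
    g_rot = Q.T @ g  # gradient expressed in the eigenbasis of H

    def step_norm(lam):
        # ||s(lam)|| where s(lam) = -(H + lam*I)^{-1} g
        return np.linalg.norm(g_rot / (eigvals + lam))

    # phi(lam) = (2/M)*lam - ||s(lam)|| is increasing; bracket its root,
    # then bisect.
    lo, hi = 1e-12, 1.0
    while (2.0 / M) * hi < step_norm(hi):
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (2.0 / M) * mid < step_norm(mid):
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return Q @ (-g_rot / (eigvals + lam))

# Toy usage on f(x) = 0.25*||x||^4, whose gradient is ||x||^2 * x and
# whose Hessian is ||x||^2 * I + 2*x*x'; the minimizer is x = 0.
x = np.array([2.0, -1.0, 0.5])
M = 20.0  # assumed bound on the Hessian's Lipschitz constant
for _ in range(30):
    sq = float(x @ x)
    g = sq * x
    H = sq * np.eye(x.size) + 2.0 * np.outer(x, x)
    x = x + cubic_newton_step(g, H, M)
print("||x|| after cubic Newton steps:", np.linalg.norm(x))
```

In the paper's setting this step would be applied to a local objective held at a master node, with statistical similarity between the local summand and the full sum keeping the number of communication rounds low; the sketch above omits all of that distributed structure.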