An Accelerated Second-Order Method for Distributed Stochastic Optimization
| Published in | 2021 60th IEEE Conference on Decision and Control (CDC), pp. 2407-2413 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 14.12.2021 |
Summary: We consider centralized distributed algorithms for general stochastic convex optimization problems, which we approximate by a finite-sum minimization problem whose summands are distributed among computational nodes. We exploit statistical similarity between the summands and the whole sum to construct a distributed accelerated cubic-regularized Newton method that achieves the lower communication-complexity bound for this setting and improves upon the existing upper bound. We then use this algorithm to obtain convergence-rate bounds for the original stochastic optimization problem and compare our bounds with existing ones in several regimes, both when the goal is to minimize the number of communication rounds and when the goal is to improve parallelization as the number of workers grows.
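For context, the setting named in the abstract can be written in standard notation. The sketch below is illustrative only: the symbols F, f_j, N, delta, and M follow generic conventions from the cubic-regularization literature and are not taken from the paper itself.

```latex
% Stochastic objective approximated by an N-sample finite sum (empirical
% risk) whose summands are split among computational nodes:
\[
  \min_{x \in \mathbb{R}^d} F(x) = \mathbb{E}_{\xi}\big[ f(x,\xi) \big]
  \;\approx\;
  \min_{x \in \mathbb{R}^d} \frac{1}{N} \sum_{i=1}^{N} f(x, \xi_i).
\]
% Statistical similarity: each node's local Hessian stays uniformly close
% to the Hessian of the full sum, with similarity parameter \delta:
\[
  \big\| \nabla^2 f_j(x) - \nabla^2 F(x) \big\| \le \delta
  \quad \text{for all } x \text{ and all nodes } j.
\]
% Generic cubic-regularized Newton step (Nesterov-Polyak form), the basic
% building block that such methods accelerate and distribute:
\[
  x_{k+1} = \operatorname*{arg\,min}_{y} \Big\{ \langle \nabla F(x_k),\, y - x_k \rangle
  + \tfrac{1}{2} \langle \nabla^2 F(x_k)(y - x_k),\, y - x_k \rangle
  + \tfrac{M}{6} \| y - x_k \|^3 \Big\}.
\]
```

A smaller delta (local curvature closer to global curvature) is what allows second-order methods of this kind to communicate less, since each subproblem can be solved largely with local Hessian information.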
ISSN: 2576-2370
DOI: 10.1109/CDC45484.2021.9683400