Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning
Format | Journal Article
---|---
Language | English
Published | 16.06.2022
Summary: We study the asynchronous stochastic gradient descent (SGD) algorithm for distributed training over $n$ workers that have varying computation and communication frequency over time. In this algorithm, workers compute stochastic gradients in parallel at their own pace and return them to the server without any synchronization. Existing convergence rates of this algorithm for non-convex smooth objectives depend on the maximum gradient delay $\tau_{\max}$ and show that an $\epsilon$-stationary point is reached after $\mathcal{O}\!\left(\sigma^2\epsilon^{-2} + \tau_{\max}\epsilon^{-1}\right)$ iterations, where $\sigma^2$ denotes the variance of the stochastic gradients.
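To make the setting concrete, the following is a minimal illustrative sketch of an asynchronous SGD server loop of the kind described above, not the paper's implementation; the worker threads, the `grad_fn` oracle, and the random sleeps that model heterogeneous worker speeds are all assumptions made for illustration. The server applies each stochastic gradient as it arrives, so an applied gradient may be stale by some delay $\tau_t$; the quantities $\tau_{\max}$ and $\tau_{\text{avg}}$ in the rates are the maximum and average of these delays.

```python
import queue
import random
import threading
import time

import numpy as np

def async_sgd(grad_fn, x0, n_workers=4, lr=0.01, n_steps=1000):
    """Illustrative asynchronous SGD loop (a sketch, not the paper's code).

    grad_fn(x) returns a stochastic gradient at x. Workers read the
    current iterate at their own pace; the server applies gradients
    on arrival, so each applied gradient may be stale.
    """
    x = np.asarray(x0, dtype=float).copy()
    inbox = queue.Queue()
    step = [0]                    # server-side iteration counter
    stop = threading.Event()

    def worker():
        while not stop.is_set():
            read_at = step[0]                     # model version the gradient is based on
            g = grad_fn(x.copy())                 # stochastic gradient, at the worker's own pace
            time.sleep(random.uniform(0, 0.005))  # heterogeneous compute/communication speed
            inbox.put((g, read_at))

    for _ in range(n_workers):
        threading.Thread(target=worker, daemon=True).start()

    delays = []
    for _ in range(n_steps):
        g, read_at = inbox.get()          # no synchronization barrier
        delays.append(step[0] - read_at)  # gradient delay tau_t
        x -= lr * g                       # update with a possibly stale gradient
        step[0] += 1                      # (concurrent reads of x are ignored in this sketch)
    stop.set()
    return x, max(delays), sum(delays) / len(delays)  # iterate, tau_max, tau_avg

# Example usage: noisy gradients of f(x) = ||x||^2.
x, tau_max, tau_avg = async_sgd(
    lambda x: 2 * x + 0.1 * np.random.randn(*x.shape), np.ones(10))
```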
In this work, (i) we obtain a tighter convergence rate of $\mathcal{O}\!\left(\sigma^2\epsilon^{-2} + \sqrt{\tau_{\max}\tau_{\text{avg}}}\,\epsilon^{-1}\right)$ without any change to the algorithm, where $\tau_{\text{avg}}$ is the average delay, which can be significantly smaller than $\tau_{\max}$ (for instance, a single straggler with delay $\tau_{\max}$ among otherwise fast workers keeps $\tau_{\text{avg}}$ small, so $\sqrt{\tau_{\max}\tau_{\text{avg}}} \ll \tau_{\max}$). We also provide (ii) a simple delay-adaptive learning rate scheme under which asynchronous SGD achieves a convergence rate of $\mathcal{O}\!\left(\sigma^2\epsilon^{-2} + \tau_{\text{avg}}\epsilon^{-1}\right)$ and requires neither extra hyperparameter tuning nor extra communication.
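The summary does not spell out the delay-adaptive rule itself, so the snippet below shows only one plausible instantiation, stated as an assumption: damp the base step size by the observed delay of the incoming gradient. This needs no extra tuning or communication because the server already knows each gradient's delay.

```python
def delay_adaptive_lr(base_lr, delay):
    """Hypothetical delay-adaptive step size (an assumption, not the
    paper's exact rule): stale gradients get proportionally smaller steps."""
    return base_lr / max(1, delay)

# Inside the server loop of the sketch above, the update would become:
#   delay = step[0] - read_at
#   x -= delay_adaptive_lr(lr, delay) * g
```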
Our result allows us to show, for the first time, that asynchronous SGD is always faster than mini-batch SGD. In addition, (iii) we consider the case of heterogeneous functions, motivated by federated learning applications, and improve the convergence rate by proving a weaker dependence on the maximum delay compared to prior works. In particular, we show that the heterogeneity term in the convergence rate is affected only by the average delay within each worker.
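For context on (iii), the heterogeneous federated setting is commonly formalized as below; the summary does not state the paper's exact assumption, so this standard formulation is given only as an assumption: each worker $i$ holds a local objective $f_i$, and a bound $\zeta^2$ measures how far the local gradients can drift from the global one.

$$
\min_{x\in\mathbb{R}^d} f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x),
\qquad
\frac{1}{n}\sum_{i=1}^{n} \left\| \nabla f_i(x) - \nabla f(x) \right\|^2 \le \zeta^2 .
$$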
DOI: 10.48550/arxiv.2206.08307