Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning

We study the asynchronous stochastic gradient descent algorithm for distributed training over $n$ workers which have varying computation and communication frequency over time. In this algorithm, workers compute stochastic gradients in parallel at their own pace and return those to the server without...
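The abstract describes asynchronous SGD: workers compute stochastic gradients in parallel at their own pace, and the server applies each returned gradient as it arrives, so updates may be based on stale model copies. Below is a minimal, hypothetical simulation sketch of that dynamic (not the authors' implementation); the quadratic objective, Gaussian gradient noise, and exponential compute times are assumptions made only for illustration.

```python
import numpy as np

# Toy simulation of asynchronous SGD on f(x) = 0.5 * ||x||^2.
# Workers finish at random times; the server applies each gradient
# immediately, even though it was computed on a stale model copy.
rng = np.random.default_rng(0)

n_workers = 4
dim = 10
lr = 0.05
n_updates = 200

x = rng.normal(size=dim)                       # current server model
copies = [x.copy() for _ in range(n_workers)]  # model copy each worker computes on
finish = rng.exponential(1.0, size=n_workers)  # random completion times (assumed)

for _ in range(n_updates):
    w = int(np.argmin(finish))                 # next worker to return a gradient
    # Stochastic gradient evaluated at the worker's (possibly stale) copy.
    g = copies[w] + 0.1 * rng.normal(size=dim)
    x -= lr * g                                # server update, no synchronization
    copies[w] = x.copy()                       # worker restarts from the latest model
    finish[w] += rng.exponential(1.0)          # next completion time for this worker

print("final ||x|| =", np.linalg.norm(x))
```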

Bibliographic Details
Main Authors: Koloskova, Anastasia; Stich, Sebastian U.; Jaggi, Martin
Format: Journal Article
Language: English
Published: 16.06.2022