Asynchronous Decentralized Learning over Unreliable Wireless Networks
Format: Journal Article
Language: English
Published: 02.02.2022
Summary: Decentralized learning enables edge users to collaboratively train models by exchanging information via device-to-device communication, yet prior works have been limited to wireless networks with fixed topologies and reliable workers. In this work, we propose an asynchronous decentralized stochastic gradient descent (DSGD) algorithm that is robust to the computation and communication failures inherent to the wireless network edge. We theoretically analyze its performance and establish a non-asymptotic convergence guarantee. Experimental results corroborate our analysis, demonstrating the benefits of asynchronicity and of reusing outdated gradient information in decentralized learning over unreliable wireless networks.
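To make the mechanism in the abstract concrete, below is a minimal Python simulation sketch of decentralized SGD with stale-gradient reuse over unreliable links. It is an illustration under assumptions not taken from the paper: the ring topology, the failure probabilities `p_compute` and `p_link`, the synthetic least-squares objective, and the renormalized gossip weights are all hypothetical choices for this sketch, not the authors' exact algorithm.

```python
# Sketch (not the paper's algorithm): DSGD where a worker whose computation
# fails reuses its cached, outdated gradient, and gossip averaging skips
# links that fail to deliver. All failure models here are assumed.
import numpy as np

rng = np.random.default_rng(0)

n_workers, dim, rounds = 8, 5, 500
p_compute = 0.7   # assumed probability a worker finishes its gradient this round
p_link = 0.8      # assumed probability a device-to-device link delivers this round
lr = 0.05

# Synthetic local least-squares problems: worker i minimizes ||A_i x - b_i||^2 / 2.
A = rng.normal(size=(n_workers, 20, dim))
x_star = rng.normal(size=dim)
b = A @ x_star + 0.01 * rng.normal(size=(n_workers, 20))

x = rng.normal(size=(n_workers, dim))    # local models
last_grad = np.zeros((n_workers, dim))   # cached (possibly outdated) gradients

for t in range(rounds):
    # Local step: on a computation failure, reuse the outdated cached gradient.
    for i in range(n_workers):
        if rng.random() < p_compute:
            last_grad[i] = A[i].T @ (A[i] @ x[i] - b[i]) / len(b[i])
        x[i] = x[i] - lr * last_grad[i]

    # Gossip step on a ring; each link may fail independently.
    x_new = x.copy()
    for i in range(n_workers):
        neighbors = [(i - 1) % n_workers, (i + 1) % n_workers]
        alive = [j for j in neighbors if rng.random() < p_link]
        if alive:
            # Average only the messages that actually arrived, renormalizing weights.
            w = 1.0 / (len(alive) + 1)
            x_new[i] = w * (x[i] + sum(x[j] for j in alive))
    x = x_new

avg_err = np.linalg.norm(x.mean(axis=0) - x_star)
print(f"distance of averaged model to optimum after {rounds} rounds: {avg_err:.4f}")
```

Reusing the cached gradient on a computation failure keeps every worker taking a step each round, which mirrors the benefit of outdated-gradient reuse the abstract reports; dropping failed links and renormalizing the mixing weights keeps the gossip step a valid average of whatever messages actually arrived.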
DOI: 10.48550/arxiv.2202.00955