ZenoPS: A Distributed Learning System Integrating Communication Efficiency and Security

Bibliographic Details
Published in: Algorithms, Vol. 15, No. 7, p. 233
Main Authors: Xie, Cong; Koyejo, Oluwasanmi; Gupta, Indranil
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.07.2022

Summary: Distributed machine learning is motivated primarily by two promises: increased computational power to accelerate training, and mitigation of privacy concerns. Unlike machine learning on a single device, distributed machine learning requires collaboration and communication among the devices. This creates several new challenges: (1) the heavy communication overhead can become a bottleneck that slows down training, and (2) unreliable communication and weaker control over remote entities make the distributed system vulnerable to systematic failures and malicious attacks. This paper presents a variant of stochastic gradient descent (SGD) with improved communication efficiency and security in distributed environments. Our contributions include (1) a new technique called error reset, which adapts both infrequent synchronization and message compression to reduce communication in synchronous and asynchronous training, (2) new score-based approaches for validating updates, and (3) the integration of error reset with score-based validation. The proposed system provides communication reduction, supports both synchronous and asynchronous training, and offers Byzantine tolerance and local privacy preservation. We evaluate our techniques both theoretically and empirically.
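
The summary does not spell out the mechanics of error reset, but one plausible reading, in the spirit of error-feedback compression, is sketched below: a worker takes several local SGD steps (infrequent synchronization), compresses the accumulated update (message compression), and resets a local error buffer to whatever the compressor dropped. The names (ErrorResetWorker, top_k_compress), the top-k compressor, and the hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ErrorResetWorker:
    """Runs several local SGD steps, sends a compressed update, and resets
    its error buffer to the part of the update that compression dropped."""

    def __init__(self, dim, k, lr=0.1):
        self.error = np.zeros(dim)  # residual not yet delivered to the server
        self.k = k                  # sparsity of the compressed message
        self.lr = lr

    def local_round(self, params, grads):
        """params: current global model; grads: stochastic gradients for this round."""
        local = params.copy()
        for g in grads:                           # several local steps before communicating
            local = local - self.lr * g
        update = (local - params) + self.error    # fold in the previous residual
        message = top_k_compress(update, self.k)  # compressed message sent to the server
        self.error = update - message             # "error reset": keep only what was dropped
        return message

# Toy usage: one worker, one communication round.
rng = np.random.default_rng(0)
worker = ErrorResetWorker(dim=8, k=2)
params = np.zeros(8)
msg = worker.local_round(params, [rng.normal(size=8) for _ in range(4)])
params += msg  # server applies the sparse update
```

Keeping the dropped residual in the error buffer, rather than discarding it, is what lets such schemes tolerate aggressive compression without stalling convergence.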
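The score-based validation of updates is likewise only named in the summary. A minimal sketch of the general idea, in the spirit of Zeno-style scoring, is to score each candidate update by how much it decreases a small trusted validation loss, penalized by the update's magnitude, and to discard updates that score below a threshold. The function names, the quadratic toy loss, and the parameters gamma and rho below are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def validation_score(params, update, val_loss, gamma=1.0, rho=1e-3):
    """Estimated decrease of a trusted validation loss, minus a penalty on the
    update's magnitude; larger is better."""
    descent = val_loss(params) - val_loss(params + gamma * update)
    return descent - rho * float(np.dot(update, update))

def accept_updates(params, candidates, val_loss, gamma=1.0, rho=1e-3):
    """Keep only candidate updates whose score is non-negative."""
    return [u for u in candidates
            if validation_score(params, u, val_loss, gamma, rho) >= 0.0]

# Toy usage with a quadratic validation loss minimized at the origin.
val_loss = lambda w: float(np.dot(w, w))
params = np.ones(4)
good = -0.5 * params  # moves toward the minimizer
bad = 5.0 * params    # Byzantine-looking update that increases the loss
print(accept_updates(params, [good, bad], val_loss))  # only the good update survives
```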
ISSN: 1999-4893
DOI: 10.3390/a15070233