GossipFL: A Decentralized Federated Learning Framework With Sparsified and Adaptive Communication

Bibliographic Details
Published in: IEEE Transactions on Parallel and Distributed Systems, Vol. 34, No. 3, pp. 909–922
Main Authors: Tang, Zhenheng; Shi, Shaohuai; Li, Bo; Chu, Xiaowen
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2023
Summary: Recently, federated learning (FL) techniques have enabled multiple users to train machine learning models collaboratively without data sharing. However, existing FL algorithms suffer from a communication bottleneck due to network bandwidth pressure and/or low bandwidth utilization of the participating clients, in both centralized and decentralized architectures. To address the communication problem while preserving convergence performance, we introduce GossipFL, a communication-efficient decentralized FL framework. In GossipFL, we 1) design a novel sparsification algorithm that allows each client to communicate with only one peer using a highly sparsified model, and 2) propose a novel gossip matrix generation algorithm that better utilizes bandwidth resources while preserving the convergence property. We also theoretically prove that GossipFL has convergence guarantees. We conduct experiments with three convolutional neural networks on two datasets (IID and non-IID) under two distributed environments (14 clients and 100 clients) to verify the effectiveness of GossipFL. Experimental results show that GossipFL requires 38.5% less communication traffic and 49.8% less communication time than state-of-the-art solutions while achieving comparable model accuracy.
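The summary sketches two mechanisms: per round, each client exchanges a highly sparsified model with exactly one peer, and a gossip (mixing) matrix determines the pairing. The paper's actual sparsification and bandwidth-aware matrix generation are not reproduced here; the following is a minimal Python sketch under simplified assumptions (generic top-k sparsification and a uniformly random perfect matching over an even number of clients), and the names topk_sparsify and random_gossip_matrix are hypothetical, not from the paper.

import numpy as np

def topk_sparsify(vec, k):
    # Keep only the k largest-magnitude entries of vec; zero the rest.
    # Generic top-k compression, assumed here; GossipFL's sparsifier may differ.
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def random_gossip_matrix(n, rng):
    # Build an n x n symmetric, doubly stochastic matrix that pairs each
    # client with exactly one peer (random matching, n even). Illustrative
    # only: the paper generates its gossip matrix adaptively from bandwidth.
    perm = rng.permutation(n)
    W = np.zeros((n, n))
    for i in range(0, n, 2):
        a, b = perm[i], perm[i + 1]
        W[a, b] = W[b, a] = 0.5  # average with the matched peer
        W[a, a] = W[b, b] = 0.5  # keep half of the local model
    return W

# One illustrative gossip round: each client mixes its own model with the
# sparsified model received from its single matched peer.
rng = np.random.default_rng(0)
n, d, k = 4, 10, 3                      # clients, model size, kept entries
x = [rng.standard_normal(d) for _ in range(n)]
W = random_gossip_matrix(n, rng)
sparse = [topk_sparsify(xi, k) for xi in x]
x = [W[i, i] * x[i] + sum(W[i, j] * sparse[j] for j in range(n) if j != i)
     for i in range(n)]

Because W has exactly one off-diagonal entry per row, each client sends and receives a single sparsified vector per round, which is the communication pattern the summary attributes to GossipFL; the random matching here merely stands in for the paper's adaptive, bandwidth-aware pairing.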
ISSN: 1045-9219, 1558-2183
DOI: 10.1109/TPDS.2022.3230938