GossipFL: A Decentralized Federated Learning Framework With Sparsified and Adaptive Communication

Bibliographic Details
Published in IEEE Transactions on Parallel and Distributed Systems Vol. 34; no. 3; pp. 909-922
Main Authors Tang, Zhenheng; Shi, Shaohuai; Li, Bo; Chu, Xiaowen
Format Journal Article
Language English
Published New York: IEEE, 01.03.2023
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

Abstract Recently, federated learning (FL) techniques have enabled multiple users to train machine learning models collaboratively without data sharing. However, existing FL algorithms suffer from the communication bottleneck due to network bandwidth pressure and/or low bandwidth utilization of the participating clients in both centralized and decentralized architectures. To deal with the communication problem while preserving the convergence performance, we introduce a communication-efficient decentralized FL framework, GossipFL. In GossipFL, we 1) design a novel sparsification algorithm so that each client only needs to communicate with one peer using a highly sparsified model, and 2) propose a novel gossip matrix generation algorithm that can better utilize the bandwidth resources while preserving the convergence property. We also theoretically prove that GossipFL has convergence guarantees. We conduct experiments with three convolutional neural networks on two datasets (IID and non-IID) under two distributed environments (14 clients and 100 clients) to verify the effectiveness of GossipFL. Experimental results show that GossipFL reduces communication traffic by 38.5% and communication time by 49.8% compared with state-of-the-art solutions, while achieving comparable model accuracy.
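For intuition, the two mechanisms the abstract names (a highly sparsified model sent to a single peer, and a gossip matrix that pairs clients each round) can be sketched in a few lines. This is a minimal, illustrative Python sketch only: the flattened NumPy model vectors, the function names, and the random matching used in place of the paper's bandwidth-aware gossip matrix generation are all assumptions made here for illustration; the paper's actual GossipFL procedure, including its error compensation and convergence analysis, is not reproduced.

import numpy as np

def topk_sparsify(model: np.ndarray, k: int):
    """Keep only the k largest-magnitude coordinates of a flattened
    model, so each client ships a highly sparsified payload to one peer.
    (Illustrative; not the paper's exact sparsification algorithm.)"""
    idx = np.argpartition(np.abs(model), -k)[-k:]
    return idx, model[idx]

def random_matching(n: int, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for the paper's gossip matrix generation: a random
    perfect matching, so every client talks to exactly one peer per
    round. GossipFL instead constructs the pairing to better utilize
    the available bandwidth."""
    perm = rng.permutation(n)
    peer = np.arange(n)  # an unmatched client (odd n) gossips with itself
    for a, b in zip(perm[0::2], perm[1::2]):
        peer[a], peer[b] = b, a
    return peer

def gossip_round(models, k, rng):
    """One communication round: each client receives its peer's top-k
    coordinates and averages them into its own model (pairwise gossip)."""
    peer = random_matching(len(models), rng)
    payloads = [topk_sparsify(m, k) for m in models]  # what each client sends
    updated = []
    for i, m in enumerate(models):
        idx, vals = payloads[peer[i]]       # sparse message from matched peer
        mixed = m.copy()
        mixed[idx] = 0.5 * (m[idx] + vals)  # average only on received coords
        updated.append(mixed)
    return updated

# Toy usage: 14 clients (as in the smaller experiment), 10-dim models,
# top-2 coordinates per message.
rng = np.random.default_rng(0)
models = [rng.normal(size=10) for _ in range(14)]
models = gossip_round(models, k=2, rng=rng)

Because each client exchanges only k coordinates with a single peer per round, per-round traffic scales with k rather than with the full model size, which is the communication saving the abstract quantifies.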
Authors
– Tang, Zhenheng (ORCID: 0000-0001-8769-9974; zhtang@comp.hkbu.edu.hk; Hong Kong Baptist University, Hong Kong)
– Shi, Shaohuai (ORCID: 0000-0002-1418-5160; shaohuais@hit.edu.cn; Harbin Institute of Technology, Shenzhen, China)
– Li, Bo (ORCID: 0000-0003-2083-9105; bli@cse.ust.hk; The Hong Kong University of Science and Technology, Hong Kong)
– Chu, Xiaowen (ORCID: 0000-0001-9745-4372; xwchu@ust.hk; The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China)
CODEN ITDSEO
ContentType Journal Article
Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TPDS.2022.3230938
Discipline Engineering
Computer Science
EISSN 1558-2183
EndPage 922
Genre orig-research
GrantInformation RGC GRF (grants 16209120, 16200221); RGC RIF (grant R6021-20)
ISSN 1045-9219
IsPeerReviewed true
IsScholarly true
Issue 3
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 14
PublicationDate 2023-03-01
PublicationPlace New York
PublicationTitle IEEE transactions on parallel and distributed systems
PublicationTitleAbbrev TPDS
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 909
SubjectTerms Algorithms
Artificial neural networks
Bandwidth
Bandwidths
Clients
Communication
communication efficiency
Communications traffic
Convergence
Data models
Deep learning
Federated learning
Machine learning
Model accuracy
Servers
Topology
Training
Title GossipFL: A Decentralized Federated Learning Framework With Sparsified and Adaptive Communication
URI https://ieeexplore.ieee.org/document/9996127
https://www.proquest.com/docview/2769402371
Volume 34