Node Selection Toward Faster Convergence for Federated Learning on Non-IID Data
Published in | IEEE Transactions on Network Science and Engineering, Vol. 9, No. 5, pp. 3099-3111 |
---|---|
Main Authors | Wu, Hongda; Wang, Ping |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2022 |
Subjects | |
Abstract | Federated Learning (FL) is a distributed learning paradigm that enables a large number of resource-limited nodes to collaboratively train a model without data sharing. Non-independent-and-identically-distributed (non-i.i.d.) data samples introduce discrepancies between the global and local objectives, making the FL model slow to converge. In this paper, we propose the Optimal Aggregation algorithm, which identifies the optimal subset of local updates from the participating nodes in each global round by detecting and excluding adverse local updates, based on the relationship between each local gradient and the global gradient. We then propose a Probabilistic Node Selection framework (FedPNS) that dynamically adjusts the probability of each node being selected according to the output of Optimal Aggregation, so that FedPNS preferentially selects nodes that propel faster model convergence. The convergence rate improvement of FedPNS over the commonly adopted Federated Averaging (FedAvg) algorithm is analyzed theoretically. Experimental results demonstrate the effectiveness of FedPNS in accelerating the FL convergence rate, compared to FedAvg with random node selection. |
---|---|
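The abstract describes two mechanisms: an Optimal Aggregation step that drops local updates whose gradients conflict with the global gradient, and a probabilistic selection rule that favours nodes whose updates help convergence. The sketch below illustrates that idea in Python; the inner-product test, the probability-update rule, and all function names are illustrative assumptions rather than the exact formulation in the paper.

```python
import numpy as np

def optimal_aggregation(local_grads):
    """Keep only local updates whose gradient agrees with the aggregate
    gradient (illustrative version of the 'adverse update' test)."""
    global_grad = local_grads.mean(axis=0)        # proxy for the global gradient
    kept = [i for i, g in enumerate(local_grads) if np.dot(g, global_grad) > 0.0]
    return kept or list(range(len(local_grads)))  # never return an empty subset

def update_selection_probs(probs, kept, num_nodes, step=0.05):
    """Raise the selection probability of kept nodes, lower it for excluded
    ones, then renormalise (hypothetical update rule, not the paper's)."""
    probs = probs.copy()
    for i in range(num_nodes):
        probs[i] = probs[i] + step if i in kept else max(probs[i] - step, 1e-3)
    return probs / probs.sum()

# Toy round: 5 nodes with 3-dimensional gradients.
rng = np.random.default_rng(0)
local_grads = rng.normal(size=(5, 3))
probs = np.full(5, 0.2)

kept = optimal_aggregation(local_grads)
aggregate = local_grads[kept].mean(axis=0)        # model update from the kept subset
probs = update_selection_probs(probs, kept, num_nodes=5)
next_round = rng.choice(5, size=3, replace=False, p=probs)  # participants for the next round
```

The fixed step size and the simple sign test are placeholders for whatever adjustment and exclusion rules the paper actually derives; they only show where node selection plugs into the aggregation loop.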
Author | Wu, Hongda; Wang, Ping |
Author_xml | – sequence: 1; Wu, Hongda (ORCID 0000-0001-8244-928X; hwu1226@cse.yorku.ca), Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Toronto, ON, Canada – sequence: 2; Wang, Ping (ORCID 0000-0002-1599-5480; pingw@yorku.ca), Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Toronto, ON, Canada |
CODEN | ITNSD5 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TNSE.2022.3146399 |
DatabaseName | IEEE Xplore (IEEE) IEEE All-Society Periodicals Package (ASPP) 1998-Present IEEE Electronic Library (IEL) CrossRef Computer and Information Systems Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional |
DatabaseTitle | CrossRef Computer and Information Systems Abstracts Technology Research Database Computer and Information Systems Abstracts – Academic Advanced Technologies Database with Aerospace ProQuest Computer Science Collection Computer and Information Systems Abstracts Professional |
Discipline | Engineering |
EISSN | 2334-329X |
EndPage | 3111 |
ExternalDocumentID | 10_1109_TNSE_2022_3146399 9716797 |
Genre | orig-research |
GrantInformation_xml | – fundername: Natural Sciences and Engineering Research Council of Canada (NSERC); grantid: RGPIN-2019-06375; funderid: 10.13039/501100000038 |
ISSN | 2327-4697 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 5 |
Language | English |
License | https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/Crown.html |
ORCID | 0000-0001-8244-928X 0000-0002-1599-5480 |
PageCount | 13 |
PublicationDate | 2022-09-01 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE transactions on network science and engineering |
PublicationTitleAbbrev | TNSE |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 3099 |
SubjectTerms | Agglomeration; Algorithms; Computational modeling; Convergence; Data models; Data retrieval; Elections; fast convergence; Federated learning; mobile edge computing; node selection; Nodes; Predictive models; Probabilistic logic; Servers; Training |
Title | Node Selection Toward Faster Convergence for Federated Learning on Non-IID Data |
URI | https://ieeexplore.ieee.org/document/9716797 https://www.proquest.com/docview/2712060410 |
Volume | 9 |