Federated Split Learning for Distributed Intelligence with Resource-Constrained Devices

Bibliographic Details
Published in 2024 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 798 - 803
Main Authors Ao, Huiqing, Tian, Hui, Ni, Wanli
Format Conference Proceeding
Language English
Published IEEE 09.06.2024

Abstract As a distributed machine learning paradigm, federated learning usually requires all edge devices to collaboratively train a large artificial intelligence model locally. However, this poses challenges for resource-constrained Internet of Things (IoT) devices. Moreover, the communication overhead between IoT devices and the base station is substantial for emerging big-model-based tasks. In this paper, we propose a novel framework called federated split learning (FedSL), which accounts for the heterogeneity and resource scarcity of IoT devices. To reduce the training delay and energy consumption in resource-constrained wireless networks, we formulate a mixed-integer non-linear programming problem that jointly optimizes the power allocation, device scheduling, and split layer selection. We then design an alternating optimization algorithm that solves the formulated problem with low computational complexity. Simulation results demonstrate that the FedSL framework outperforms current state-of-the-art benchmarks, highlighting the importance and superiority of device scheduling in resource-constrained IoT networks.
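The abstract only outlines the alternating optimization; the toy Python sketch below illustrates the general block-coordinate pattern of alternating between a continuous power-allocation step and a discrete device-scheduling / split-layer-selection step. All cost terms, variable names, and parameter values here (channel_gain, layer_flops, P_MAX, SKIP_PENALTY) are illustrative assumptions, not the paper's actual formulation.

import itertools
import numpy as np

# Illustrative sketch only: alternating (block-coordinate) optimization over
# continuous power allocation and discrete device scheduling / split-layer
# selection. The cost model is a made-up delay-plus-energy surrogate,
# not the MINLP formulated in the paper.

rng = np.random.default_rng(0)
num_devices, num_layers = 4, 6
P_MAX = 1.0                                        # assumed per-device power budget
channel_gain = rng.uniform(0.2, 1.0, num_devices)  # hypothetical channel gains
layer_flops = np.linspace(1.0, 3.0, num_layers)    # hypothetical device-side load per split layer
SKIP_PENALTY = 5.0                                 # assumed penalty for leaving a device unscheduled

def cost(power, schedule, split):
    """Placeholder delay-plus-energy surrogate: smaller is better."""
    active = np.asarray(schedule, dtype=float)
    comm_delay = active / (channel_gain * np.maximum(power, 1e-6))
    comp_delay = active * layer_flops[np.asarray(split)]
    energy = power * comm_delay
    return comm_delay.sum() + comp_delay.sum() + energy.sum() + SKIP_PENALTY * (1.0 - active).sum()

power = np.full(num_devices, 0.5 * P_MAX)   # initial power allocation
schedule = [1] * num_devices                # 1 = device participates this round
split = [num_layers // 2] * num_devices     # split-layer index per device

for _ in range(10):
    # Block 1: fix (schedule, split), refine each device's power by grid search.
    for k in range(num_devices):
        candidates = []
        for p in np.linspace(0.05, P_MAX, 20):
            trial = power.copy()
            trial[k] = p
            candidates.append((cost(trial, schedule, split), p))
        power[k] = min(candidates)[1]
    # Block 2: fix power, refine the discrete scheduling/split decision per device.
    for k in range(num_devices):
        best = min(itertools.product([0, 1], range(num_layers)),
                   key=lambda c: cost(power,
                                      [c[0] if i == k else schedule[i] for i in range(num_devices)],
                                      [c[1] if i == k else split[i] for i in range(num_devices)]))
        schedule[k], split[k] = best

print("surrogate cost after alternating optimization:", round(float(cost(power, schedule, split)), 3))

Grid search and coordinate-wise enumeration are used here only to keep the sketch self-contained; the paper derives its own low-complexity solutions for each subproblem.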
Author Ni, Wanli
Tian, Hui
Ao, Huiqing
Author_xml – sequence: 1
  givenname: Huiqing
  surname: Ao
  fullname: Ao, Huiqing
  email: hqao@bupt.edu.cn
  organization: Beijing University of Posts and Telecommunications, State Key Laboratory of Networking and Switching Technology, Beijing, China, 100876
– sequence: 2
  givenname: Hui
  surname: Tian
  fullname: Tian, Hui
  email: tianhui@bupt.edu.cn
  organization: Beijing University of Posts and Telecommunications, State Key Laboratory of Networking and Switching Technology, Beijing, China, 100876
– sequence: 3
  givenname: Wanli
  surname: Ni
  fullname: Ni, Wanli
  email: niwanli@tsinghua.edu.cn
  organization: Tsinghua University, Beijing National Research Center for Information Science and Technology, Department of Electronic Engineering, Beijing, China, 100084
ContentType Conference Proceeding
DOI 10.1109/ICCWorkshops59551.2024.10615309
Discipline Engineering
EISBN 9798350304053
EISSN 2694-2941
EndPage 803
ExternalDocumentID 10615309
Genre orig-research
GrantInformation_xml – fundername: Natural Science Foundation of Beijing, China
  grantid: L232052
  funderid: 10.13039/501100001810
IsPeerReviewed false
IsScholarly false
Language English
PublicationCentury 2000
PublicationDate 2024-June-9
PublicationDateYYYYMMDD 2024-06-09
PublicationDate_xml – month: 06
  year: 2024
  text: 2024-June-9
  day: 09
PublicationDecade 2020
PublicationTitle 2024 IEEE International Conference on Communications Workshops (ICC Workshops)
PublicationTitleAbbrev ICCWORKSHOPS
PublicationYear 2024
Publisher IEEE
Publisher_xml – name: IEEE
StartPage 798
SubjectTerms Benchmark testing
Conferences
device scheduling
edge intelligence
Energy consumption
Federated split learning
Internet of Things
Processor scheduling
resource allocation
Simulation
Training
Wireless networks
Title Federated Split Learning for Distributed Intelligence with Resource-Constrained Devices
URI https://ieeexplore.ieee.org/document/10615309