Accelerating Wireless Federated Learning with Adaptive Scheduling over Heterogeneous Devices

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 11, No. 2, p. 1
Main Authors: Li, Yixuan; Qin, Xiaoqi; Han, Kaifeng; Ma, Nan; Xu, Xiaodong; Zhang, Ping
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.01.2024
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3292494

Summary: With the proliferation of sophisticated task models in 5G-empowered digital twins, there is a significant demand for fast and accurate model training over resource-limited wireless networks. It is vital to investigate how to accelerate the training process based on the salient features of practical systems, including data distributions and system resources that are heterogeneous both across devices and over time. To study the non-trivial coupling between participating-device selection and the corresponding training parameters, we first characterize how the convergence performance bound depends on system parameters, i.e., the statistical structure of local data, the mini-batch size, and the gradient quantization level. Based on this theoretical analysis, a training efficiency optimization problem is formulated subject to heterogeneous communication and computation capabilities across devices. To realize online control of the training parameters, we propose an adaptive batch-size assisted device scheduling strategy, which prioritizes the selection of devices that offer good data utility and dynamically adjusts their mini-batch sizes and gradient quantization levels to the network conditions. Simulation results demonstrate that the proposed strategy effectively speeds up the training process compared with benchmark algorithms.
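
The abstract outlines a per-round control loop: rank devices by data utility, select the best ones, and fit each selected device's mini-batch size and gradient quantization level to its communication and computation budget. The Python sketch below is only a rough illustration of that idea under assumptions not stated in the record (the Device fields, the half-deadline split between upload and compute time, and the candidate quantization levels are all hypothetical); it is not the authors' algorithm.

# Illustrative sketch of an adaptive batch-size assisted device scheduler.
# All names and the specific scoring/assignment rules are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Device:
    device_id: int
    data_utility: float      # score derived from local data statistics (assumed)
    compute_rate: float      # local samples processed per second (assumed)
    link_rate: float         # uplink rate in bits per second (assumed)


def schedule_round(devices: List[Device],
                   num_selected: int,
                   round_deadline_s: float,
                   model_dim: int,
                   quant_levels=(2, 4, 8)) -> List[dict]:
    """Pick the devices with the highest data utility, then fit each one's
    mini-batch size and quantization level to the round deadline."""
    # Prioritize devices offering good data utility.
    selected = sorted(devices, key=lambda d: d.data_utility, reverse=True)[:num_selected]

    assignments = []
    for dev in selected:
        # Choose the finest quantization whose upload fits within half the
        # deadline (the upload/compute split is an arbitrary assumption).
        bits_budget = dev.link_rate * (round_deadline_s / 2)
        feasible = [q for q in quant_levels if model_dim * q <= bits_budget]
        quant = max(feasible) if feasible else min(quant_levels)

        # Spend the remaining time on local computation: a faster device gets
        # a larger mini-batch, with a floor of one sample.
        compute_time = round_deadline_s - model_dim * quant / dev.link_rate
        batch_size = max(1, int(dev.compute_rate * max(compute_time, 0.0)))

        assignments.append({"device_id": dev.device_id,
                            "batch_size": batch_size,
                            "quant_bits": quant})
    return assignments


if __name__ == "__main__":
    pool = [Device(i, data_utility=1.0 / (i + 1), compute_rate=200 + 50 * i,
                   link_rate=1e6 * (i + 1)) for i in range(6)]
    for a in schedule_round(pool, num_selected=3, round_deadline_s=5.0, model_dim=100_000):
        print(a)

The point of the sketch is the coupling the abstract emphasizes: which devices are selected determines how much of the round deadline each one can spend on computation versus upload, which in turn fixes its feasible mini-batch size and quantization level.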