Quantized Distributed Federated Learning for Industrial Internet of Things


Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 10, no. 4, pp. 3027-3036
Main Authors: Ma, Teng; Wang, Haibo; Li, Chong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.02.2023

Summary: Federated learning (FL) enables multiple devices to collaboratively train a shared machine learning (ML) model while keeping all local data private, which makes it a crucial enabler for implementing artificial intelligence (AI) at the edge in Industrial Internet of Things (IIoT) scenarios. Distributed FL (DFL) based on Device-to-Device (D2D) communications avoids the single point of failure and scaling issues of centralized FL, but is subject to the communication resource limitations of D2D links. It is therefore crucial to reduce the volume of FL model data transmitted between devices. In this article, we propose a quantization-based DFL (Q-DFL) mechanism for D2D networks and prove its convergence. Q-DFL consists of two phases: 1) in phase I, each IIoT device trains a local model with the stochastic gradient descent (SGD) algorithm and then exchanges the quantized model parameters with its neighboring nodes; and 2) in phase II, a quantized consensus mechanism ensures that the local models converge to the same global model. We also propose an adaptive stopping mechanism and a synchronization protocol to handle the transition from phase I to phase II. Simulation results show that with Q-DFL, a 1-bit quantizer can be employed without preventing model convergence, at the cost of a slight accuracy reduction, which yields significant transmission bandwidth savings. Further simulations of Q-DFL with the MobileNet model at different quantization bit levels reveal the performance tradeoff among system information flow consumption, system time delay, and system energy cost.
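The two-phase scheme summarized above can be illustrated with a minimal sketch: each device quantizes its model to 1 bit per parameter before transmission, and neighbors combine the dequantized messages in a gossip-style consensus step. This is not the paper's implementation; the sign-and-scale quantizer, the mixing weight, and the consensus update rule are all illustrative assumptions.

```python
import numpy as np

def one_bit_quantize(params):
    """Hypothetical 1-bit quantizer: transmit only the sign of each
    parameter plus one scalar scale (the mean absolute value), so a
    length-n float vector shrinks to n bits + one float."""
    scale = np.mean(np.abs(params))
    return scale, np.sign(params)

def dequantize(scale, signs):
    """Reconstruct an approximate parameter vector from a 1-bit message."""
    return scale * signs

def consensus_step(local, neighbor_msgs, weight=0.5):
    """One gossip-style consensus update: move the local model toward
    the average of the dequantized neighbor models (weight assumed)."""
    neighbor_avg = np.mean(
        [dequantize(s, q) for s, q in neighbor_msgs], axis=0
    )
    return (1 - weight) * local + weight * neighbor_avg

# Toy usage: two devices exchange 1-bit messages and mix.
rng = np.random.default_rng(0)
w_a = rng.normal(size=8)  # stand-in for a locally SGD-trained model
w_b = rng.normal(size=8)
msg_a = one_bit_quantize(w_a)
msg_b = one_bit_quantize(w_b)
w_a_next = consensus_step(w_a, [msg_b])
w_b_next = consensus_step(w_b, [msg_a])
```

Repeating the exchange-and-mix step drives the local models toward a common average, which is the role phase II plays in the mechanism described above.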
ISSN: 2327-4662
DOI: 10.1109/JIOT.2021.3139772