Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT


Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 7, No. 7, pp. 5986-5994
Main Authors: Mills, Jed; Hu, Jia; Min, Geyong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2020
Summary: The rapidly expanding number of Internet of Things (IoT) devices is generating huge quantities of data, but public concern over data privacy means users are apprehensive about sending data to a central server for machine learning (ML) purposes. The easily reconfigurable behavior of edge infrastructure that software-defined networking (SDN) provides makes it possible to collate IoT data at edge servers and gateways, where federated learning (FL) can be performed: building a central model without uploading data to the server. FedAvg is a widely studied FL algorithm; however, it requires a large number of rounds to converge on non-independent and identically distributed (non-IID) client data sets and incurs high communication costs per round. We propose adapting FedAvg to use a distributed form of Adam optimization, greatly reducing the number of rounds to convergence, along with novel compression techniques, to produce communication-efficient FedAvg (CE-FedAvg). We perform extensive experiments with the MNIST/CIFAR-10 data sets, IID/non-IID client data, varying numbers of clients, client participation rates, and compression rates. These show that CE-FedAvg can converge to a target accuracy in up to 6x fewer rounds than similarly compressed FedAvg, while uploading up to 3x less data, and is more robust to aggressive compression. Experiments on an edge-computing-like testbed using Raspberry Pi clients also show that CE-FedAvg is able to reach a target accuracy in up to 1.7x less real time than FedAvg.
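The FedAvg scheme that CE-FedAvg builds on aggregates client models on the server as a data-size-weighted average of their parameters. The following is a minimal NumPy sketch of that server-side averaging step only, not the authors' implementation — it omits the distributed Adam optimization and the compression techniques that distinguish CE-FedAvg, and the function and variable names are illustrative:

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: average client parameters, weighted by
    each client's number of local training samples.

    client_weights: list (one entry per client) of lists of np.ndarray,
                    one array per model layer
    client_sizes:   local training-set size of each client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Sum each client's layer, scaled by its share of the data
        layer_avg = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_avg)
    return aggregated

# Two toy clients with a single-layer "model"; client 2 holds 3x the data,
# so its parameters dominate the average.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [10, 30]
avg = fedavg_aggregate(clients, sizes)
# avg[0] -> array([2.5, 3.5])
```

In a full FL round, each participating client would train locally for several epochs before this aggregation, and the averaged model would be broadcast back for the next round.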
ISSN: 2327-4662
DOI: 10.1109/JIOT.2019.2956615