Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation

Bibliographic Details
Published in: IEEE Transactions on Parallel and Distributed Systems, Vol. 33, No. 10, pp. 2401-2415
Main Authors: Li, Jun; Shao, Yumeng; Wei, Kang; Ding, Ming; Ma, Chuan; Shi, Long; Han, Zhu; Poor, H. Vincent
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2022

Summary: Federated learning (FL), as a distributed machine learning paradigm, promotes personal privacy through local data processing at each client. However, because it relies on a centralized server for model aggregation, standard FL is vulnerable to server malfunctions, untrustworthy servers, and external attacks. To address these issues, we propose a decentralized FL framework that integrates blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL). In each round of the proposed BLADE-FL, every client broadcasts its trained model to the other clients, aggregates its own model with the received ones, and then competes to generate a block before starting its local training for the next round. We evaluate the learning performance of BLADE-FL and develop an upper bound on the global loss function. We then verify that this bound is convex with respect to the total number of aggregation rounds K, and optimize the computing resource allocation to minimize the upper bound. We also identify a critical problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to disguise their cheating. Focusing on this problem, we explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients. Experiments on the MNIST and Fashion-MNIST datasets are consistent with the analysis: the gap between the developed upper bound and the experimental results is below 5%, and the K optimized from the upper bound effectively minimizes the loss function.
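
The per-round protocol and the lazy-client attack described in the abstract can be sketched in a few lines. The PyTorch snippet below is a minimal illustration only: it assumes FedAvg-style parameter averaging for the aggregation step and Gaussian noise for the lazy clients' disguise, and it abstracts the block-generation (mining) competition away entirely. All function names and hyperparameters are hypothetical, not taken from the paper.

# A minimal sketch (assumed, not from the paper) of one BLADE-FL round:
# honest clients train locally and broadcast their models; lazy clients
# plagiarize a received model and add artificial Gaussian noise; every
# client then aggregates all broadcast models. The block-generation
# (mining) competition between rounds is omitted.

import copy
import torch

def local_training(model, loader, epochs=1, lr=0.01):
    # Plain SGD on the client's private data (standard FL local update).
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def aggregate(models):
    # FedAvg-style parameter averaging over all broadcast models
    # (an assumption; the paper's aggregation rule may differ).
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in avg.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return avg

def lazy_update(received, noise_std=0.01):
    # A lazy client skips training: it copies a received model and
    # perturbs it with noise to disguise the plagiarism.
    fake = copy.deepcopy(received[0])
    with torch.no_grad():
        for param in fake.parameters():
            param.add_(noise_std * torch.randn_like(param))
    return fake

def blade_fl_round(models, loaders, lazy_ids=()):
    # Assumes at least one honest client, so lazy clients have a
    # broadcast to plagiarize.
    honest = [local_training(models[i], loaders[i])
              for i in range(len(models)) if i not in lazy_ids]
    broadcast = honest + [lazy_update(honest) for _ in lazy_ids]
    new_global = aggregate(broadcast)
    # Each client starts the next round from the aggregated model.
    return [copy.deepcopy(new_global) for _ in models]

Under this sketch a lazy client contributes no fresh training, so the average is effectively taken over fewer genuinely trained models; this is the training deficiency whose effect on the optimal number of rounds K the paper quantifies.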
ISSN: 1045-9219, 1558-2183
DOI: 10.1109/TPDS.2021.3138848