LoMar: A Local Defense Against Poisoning Attack on Federated Learning

Bibliographic Details
Published in: IEEE Transactions on Dependable and Secure Computing, Vol. 20, no. 1, pp. 437-450
Main Authors: Li, Xingyu; Qu, Zhe; Zhao, Shangqing; Tang, Bo; Lu, Zhuo; Liu, Yao
Format: Journal Article
Language: English
Published: Washington: IEEE (IEEE Computer Society), 01.01.2023
ISSN: 1545-5971, 1941-0018
DOI: 10.1109/TDSC.2021.3135422

More Information
Summary: Federated learning (FL) provides a highly efficient decentralized machine learning framework in which the training data remains distributed at remote clients in a network. Although FL enables a privacy-preserving mobile edge computing framework using IoT devices, recent studies have shown that this approach is susceptible to poisoning attacks launched from remote clients. To address poisoning attacks on FL, we propose a two-phase defense algorithm called Local Malicious Factor (LoMar). In phase I, LoMar scores the model update from each remote client by measuring its relative distribution over its neighbors using a kernel density estimation method. In phase II, an optimal threshold is approximated to distinguish malicious updates from clean ones from a statistical perspective. Comprehensive experiments on four real-world datasets show that our defense strategy can effectively protect the FL system.
Specifically, under a label-flipping attack on the Amazon dataset, LoMar increases the target-label testing accuracy from 96.0% to 98.8%, and the overall averaged testing accuracy from 90.1% to 97.0%, compared with FG+Krum.
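The two phases described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the Gaussian kernel, the k-nearest-neighbor density estimate, and the fixed filtering threshold are all simplifying assumptions standing in for the paper's actual factor and statistically approximated threshold.

```python
import numpy as np

def lomar_scores(updates, bandwidth=1.0, k=5):
    """Phase I sketch: score each client's flattened model update by a
    Gaussian kernel density estimated over its k nearest neighbors.
    A low score suggests the update deviates from its neighborhood
    (a possible sign of poisoning). All parameters are illustrative."""
    n = len(updates)
    U = np.stack(updates)                          # (n_clients, n_params)
    # pairwise Euclidean distances between client updates
    d = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        # distances to the k nearest other clients
        nn = np.sort(d[i][np.arange(n) != i])[:k]
        scores[i] = np.mean(np.exp(-(nn / bandwidth) ** 2 / 2.0))
    return scores

def filter_updates(updates, threshold):
    """Phase II sketch: keep only updates whose density score exceeds a
    threshold. The paper approximates an optimal threshold statistically;
    here it is simply passed in by the caller."""
    s = lomar_scores(updates)
    return [u for u, si in zip(updates, s) if si >= threshold]
```

For example, nine benign updates drawn close together plus one far-away outlier would give the outlier the lowest density score, and a suitable threshold would remove only that update before aggregation.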