Mitigation of poisoning attack in federated learning by using historical distance detection

Bibliographic Details
Published in: 2021 5th Cyber Security in Networking Conference (CSNet), pp. 10-17
Main Authors: Shi, Zhaosen; Ding, Xuyang; Li, Fagen; Chen, Yingni; Li, Canran
Format: Conference Proceeding
Language: English
Published: IEEE, 12.10.2021

Summary: Federated learning makes it possible for users to jointly train a model while keeping their data stored locally. It is a novel privacy-preserving machine learning framework. Nevertheless, the framework faces availability and integrity threats: malicious clients may pose as benign ones to interfere with the global model, since local models are indistinguishable to the server at aggregation. This behavior is known as a poisoning attack, which is generally divided into data poisoning and model poisoning. In this paper, we consider a federated learning scenario with one reliable central server and several clients, in which some malicious clients launch poisoning attacks. In this scenario we explore the statistical relationship of the Euclidean distances among models, both between pairs of benign models and between malicious and benign models. Based on these findings, and inspired by evolutionary clustering, we design a defense method that screens possible malicious agents and mitigates their attack before each round's aggregation. The scheme is implemented at the central server, and its mitigation decisions draw on the detection results of both the current round and the previous round. Lastly, we demonstrate the effectiveness of our scheme through experiments on several different scenarios.
ISSN: 2768-0029
DOI: 10.1109/CSNet52717.2021.9614278
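
The summary sketches a distance-based screen run at the server before each aggregation round, with the decision blending the current round's detections and the previous round's. The snippet below is a minimal illustration of that idea, not the authors' exact algorithm: the function name screen_and_aggregate, the blending weight alpha, and the median-based cutoff tau are all assumptions, and plain averaging stands in for whatever aggregation rule the paper uses.

```python
# Hypothetical sketch of a history-aware distance screen; names, weights,
# and the threshold rule are illustrative, not the authors' exact method.
import numpy as np

def screen_and_aggregate(updates, prev_scores, alpha=0.5, tau=1.5):
    """Flag likely-poisoned local models before averaging.

    updates     : list of 1-D np.ndarray, one flattened local model per client
    prev_scores : np.ndarray of last round's suspicion scores (zeros in round 1)
    alpha       : weight on the current round relative to the historical scores
    tau         : clients whose smoothed score exceeds tau * median are dropped
    """
    n = len(updates)
    # Pairwise Euclidean distances between the clients' local models.
    dists = np.array([[np.linalg.norm(updates[i] - updates[j])
                       for j in range(n)] for i in range(n)])
    # A model far from all the others is suspicious: score each client by its
    # mean distance to the rest (the diagonal of dists is zero).
    cur_scores = dists.sum(axis=1) / (n - 1)
    # Evolutionary-clustering-style smoothing: blend with the previous round
    # so a single noisy round neither clears nor condemns a client.
    scores = alpha * cur_scores + (1 - alpha) * prev_scores
    benign = scores <= tau * np.median(scores)
    # Aggregate only the models that pass the screen (plain averaging here).
    global_model = np.mean([u for u, ok in zip(updates, benign) if ok], axis=0)
    return global_model, scores
```

Each round the server would feed the returned scores back in as prev_scores, so that a client's history keeps influencing the next round's screen, matching the summary's use of both current and previous detection results.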