A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 19, pp. 6693-6708
Main Authors: Yazdinejad, Abbas; Dehghantanha, Ali; Karimipour, Hadis; Srivastava, Gautam; Parizi, Reza M.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2024.3420126

Summary: Although federated learning offers a level of privacy by aggregating user data without direct access, it remains inherently vulnerable to various attacks, including poisoning attacks in which malicious actors submit gradients that degrade model accuracy. In addressing model poisoning attacks, existing defense strategies primarily concentrate on detecting suspicious local gradients over plaintext. However, detecting non-independent and identically distributed (non-IID) encrypted gradients poses significant challenges for existing methods. Moreover, reducing computational complexity and communication overhead becomes crucial in privacy-preserving federated learning, particularly when gradients are encrypted. To address these concerns, we propose a robust privacy-preserving federated learning model that is resilient against model poisoning attacks without sacrificing accuracy. Our approach introduces an internal auditor that evaluates encrypted gradient similarity and distribution to differentiate between benign and malicious gradients, employing a Gaussian Mixture Model and the Mahalanobis Distance for Byzantine-tolerant aggregation. The proposed model uses Additive Homomorphic Encryption to ensure confidentiality while minimizing computational and communication overhead. Our model demonstrates superior performance in accuracy and privacy compared to existing strategies and encryption techniques, such as Fully Homomorphic Encryption and Two-Trapdoor Homomorphic Encryption, and effectively addresses the challenge of detecting malicious encrypted non-IID gradients with low computational and communication overhead.
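
As a rough illustration of the auditing idea described in the abstract, the sketch below applies a Gaussian Mixture Model and the Mahalanobis Distance to flag outlier client gradients before averaging. It operates on plaintext vectors, whereas the paper's internal auditor works over additively homomorphically encrypted gradients; the two-component GMM, the distance threshold, and the function names are illustrative assumptions rather than the authors' exact design.

# Minimal plaintext sketch of GMM + Mahalanobis gradient auditing.
# Assumptions: a 2-component GMM, a distance threshold of 3.0, and plain mean
# aggregation are illustrative choices, not the paper's exact parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

def audit_and_aggregate(client_grads, threshold=3.0):
    """Drop client gradient vectors whose Mahalanobis distance from the
    dominant GMM component exceeds the threshold, then average the rest."""
    X = np.stack([np.ravel(g) for g in client_grads])        # one row per client
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          reg_covar=1e-3, random_state=0).fit(X)
    labels = gmm.predict(X)
    benign = np.bincount(labels).argmax()                     # largest cluster treated as benign
    mu = gmm.means_[benign]
    cov_inv = np.linalg.pinv(gmm.covariances_[benign])
    # Per-client Mahalanobis distance to the benign component.
    diff = X - mu
    dists = np.sqrt(np.maximum(np.einsum("ij,jk,ik->i", diff, cov_inv, diff), 0.0))
    keep = X[dists <= threshold]
    return keep.mean(axis=0) if len(keep) else X.mean(axis=0)

A call such as audit_and_aggregate([g1, g2, ...]) would return a single aggregated update; in the actual system this filtering and aggregation would be carried out by the internal auditor over ciphertexts produced with Additive Homomorphic Encryption.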