pFedDef: Characterizing evasion attack transferability in federated learning
Federated learning jointly trains a model across multiple clients, leveraging information from all of them. However, client models are vulnerable to attacks during training and testing. We introduce the pFedDef library, which analyzes and addresses the issue of adversarial clients performing internal evasion attacks at test time to deceive other clients. pFedDef characterizes the transferability of internal evasion attacks for different learning methods and analyzes the trade-off between model accuracy and robustness to these attacks. We show that personalized federated adversarial training increases relative robustness by 60% compared to federated adversarial training and performs well even under limited system resources.
Published in | Software Impacts Vol. 15; p. 100469 |
---|---|
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 01.03.2023 |
Summary: | Federated learning jointly trains a model across multiple clients, leveraging information from all of them. However, client models are vulnerable to attacks during training and testing. We introduce the pFedDef library, which analyzes and addresses the issue of adversarial clients performing internal evasion attacks at test time to deceive other clients. pFedDef characterizes the transferability of internal evasion attacks for different learning methods and analyzes the trade-off between model accuracy and robustness to these attacks. We show that personalized federated adversarial training increases relative robustness by 60% compared to federated adversarial training and performs well even under limited system resources.
• Federated learning allows multiple clients to jointly train a neural network. • Clients in federated learning can perform evasion attacks against other clients. • Federated adversarial training increases robustness against evasion attacks. • The pFedDef library combines personalized federated learning with adversarial training. • Differences between client models due to personalization increase robustness. |
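The highlights above describe the combination of personalized federated learning with adversarial training against evasion attacks. As a minimal illustrative sketch (not the pFedDef API; all names and parameters here are hypothetical), the following shows the core idea on a logistic-regression model: each client trains locally on FGSM-perturbed examples, the server averages the client models, and each client keeps a personalized mix of its local model and the global average.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(w, x, y):
    # Gradient of the logistic loss w.r.t. the input x (used by FGSM).
    return (sigmoid(w @ x) - y) * w

def grad_w(w, x, y):
    # Gradient of the logistic loss w.r.t. the weights w.
    return (sigmoid(w @ x) - y) * x

def fgsm(w, x, y, eps=0.1):
    # Evasion attack: perturb x in the direction that increases the loss.
    return x + eps * np.sign(grad_x(w, x, y))

def local_adv_step(w, X, Y, lr=0.1, eps=0.1):
    # One local epoch of adversarial training: fit on FGSM examples.
    for x, y in zip(X, Y):
        x_adv = fgsm(w, x, y, eps)
        w = w - lr * grad_w(w, x_adv, y)
    return w

def federated_round(client_ws, datasets, alpha=0.5):
    # Local adversarial training on each client, then FedAvg; each
    # client keeps a personalized mix of local and global models.
    locals_ = [local_adv_step(w, X, Y)
               for w, (X, Y) in zip(client_ws, datasets)]
    global_w = np.mean(locals_, axis=0)
    return [alpha * wl + (1 - alpha) * global_w for wl in locals_]

# Toy data: two clients, 2-D inputs, labels in {0, 1}.
datasets = []
for _ in range(2):
    X = rng.normal(size=(20, 2))
    Y = (X[:, 0] + X[:, 1] > 0).astype(float)
    datasets.append((X, Y))

ws = [np.zeros(2) for _ in range(2)]
for _ in range(30):
    ws = federated_round(ws, datasets)

# Clean-data accuracy across both clients' personalized models.
acc = np.mean([(sigmoid(X @ w) > 0.5) == Y
               for w, (X, Y) in zip(ws, datasets)])
print(round(float(acc), 2))
```

The `alpha` mixing weight is the personalization knob in this sketch: at `alpha=1` each client is fully local, at `alpha=0` all clients share one global model; intermediate values keep client models slightly different, which the highlights note as a source of robustness to transferred attacks.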
ISSN: | 2665-9638 |
DOI: | 10.1016/j.simpa.2023.100469 |