DaBA: Data-free Backdoor Attack against Federated Learning via Malicious Server
| Published in | 2023 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML), pp. 882-890 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 03.11.2023 |
| Subjects | |
| Summary | Current research shows that the privacy of FL is threatened by an honest-but-curious server. However, existing work focuses on privacy attacks launched by the malicious server while overlooking that it could also compromise the shared model's integrity by mounting poisoning attacks. In this work, we propose a novel data-free backdoor attack (DaBA) against FL via a malicious server to bridge this gap. Specifically, we use global model inversion to obtain a dummy dataset on the server side, add backdoor triggers to a portion of the inputs in the dummy dataset and replace their labels with the target label, and finally retrain part of the global model on the poisoned dummy dataset. Our experimental results show that DaBA achieves a high attack success rate on poisoned samples and high prediction accuracy on clean samples, demonstrating its effectiveness and stealthiness, respectively. For example, on the MNIST dataset, DaBA achieves a 99.6% attack success rate and a 96.3% accuracy rate. We also discuss possible defense strategies against our attack. Our research reveals a significant security risk in FL. (An illustrative sketch of this pipeline follows this record.) |
| DOI | 10.1109/ICICML60161.2023.10424802 |
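
The summary above describes three server-side steps: model inversion to synthesize a dummy dataset, trigger injection with target-label replacement on part of that dataset, and partial retraining of the global model. The following PyTorch sketch only illustrates that pipeline under stated assumptions; it is not the authors' implementation. The function names, trigger shape, poisoning fraction, and the choice to retrain only the final classifier layer are all assumptions made for clarity.

```python
# Illustrative sketch of a DaBA-style server-side poisoning pipeline.
# All names, hyperparameters, and the trigger design are hypothetical.
import torch
import torch.nn.functional as F


def invert_global_model(model, num_samples, num_classes,
                        img_shape=(1, 28, 28), steps=200, lr=0.1):
    """Model inversion: optimize random noise so the frozen global model
    assigns it confidently to randomly chosen class labels (dummy dataset)."""
    for p in model.parameters():
        p.requires_grad_(False)                      # freeze the global model
    x = torch.randn(num_samples, *img_shape, requires_grad=True)
    y = torch.randint(0, num_classes, (num_samples,))
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()      # pull x toward class y
        opt.step()
    return x.detach(), y


def poison(dummy_x, dummy_y, target_label, poison_frac=0.3):
    """Stamp a small trigger patch on a fraction of the dummy inputs
    and replace their labels with the attacker's target label."""
    x, y = dummy_x.clone(), dummy_y.clone()
    n = int(poison_frac * len(x))
    x[:n, :, -4:, -4:] = 1.0                         # white square, bottom-right corner
    y[:n] = target_label
    return x, y


def backdoor_retrain(model, x, y, epochs=5, lr=1e-3):
    """Retrain only part of the global model (here, the assumed final
    classifier layer) on the poisoned dummy dataset."""
    head = list(model.parameters())[-2:]             # assume last weight + bias
    for p in head:
        p.requires_grad_(True)
    opt = torch.optim.Adam(head, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model
```

In a federated round, a malicious server would presumably run these three steps on the aggregated global model before broadcasting it back to clients, which is how the backdoor would reach every participant while clean-sample accuracy stays high.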