BadCleaner: Defending Backdoor Attacks in Federated Learning via Attention-Based Multi-Teacher Distillation

Bibliographic Details
Published in: IEEE Transactions on Dependable and Secure Computing, Vol. 21, No. 5, pp. 4559-4573
Main Authors: Zhang, Jiale; Zhu, Chengcheng; Ge, Chunpeng; Ma, Chuan; Zhao, Yanchao; Sun, Xiaobing; Chen, Bing
Format: Journal Article
Language: English
Published: Washington: IEEE (IEEE Computer Society), 01.09.2024
ISSN: 1545-5971, 1941-0018
DOI: 10.1109/TDSC.2024.3354049

Summary: As a privacy-preserving distributed learning paradigm, federated learning (FL) has been proven vulnerable to various attacks, among which the backdoor attack is one of the toughest. In this attack, malicious users attempt to embed backdoor triggers into local models so that crafted inputs are misclassified as the targeted labels. Several defense mechanisms have been proposed against such attacks, but they may lose effectiveness due to the following drawbacks. First, current methods rely heavily on massive amounts of labeled clean data, an impractical assumption in FL. Moreover, an unavoidable performance degradation usually occurs during the defensive procedure. To alleviate these concerns, we propose BadCleaner, a lossless and efficient backdoor defense scheme based on attention-based federated multi-teacher distillation. First, BadCleaner can effectively tune the backdoored joint model without performance degradation by distilling in-depth knowledge from multiple teachers using only a small amount of unlabeled clean data. Second, to fully eliminate the hidden backdoor patterns, we present an attention transfer method that diverts the models' attention away from the trigger regions. Extensive evaluation demonstrates that BadCleaner reduces the success rates of state-of-the-art backdoor attacks without compromising model performance.
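To make the distillation-plus-attention-transfer idea above concrete, the following is a minimal PyTorch-style sketch of such a fine-tuning objective. It is only an illustration under stated assumptions, not the paper's exact formulation: the attention-map definition (the attention-transfer form of Zagoruyko & Komodakis), the model interface returning (logits, feature maps), and the loss weights alpha/beta are all assumed here.

# Minimal sketch of attention-based multi-teacher distillation (PyTorch).
# All names, weights, and the attention-map definition are illustrative
# assumptions, not BadCleaner's exact formulation.
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Spatial attention map from a conv feature tensor (B, C, H, W):
    # average of squared activations over channels, flattened and
    # L2-normalized per sample.
    a = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(a, p=2, dim=1)

def distill_step(student, teachers, x, T=4.0, alpha=1.0, beta=100.0):
    """One fine-tuning step on a batch of *unlabeled* clean inputs x.

    `student` is the backdoored joint model being cleaned; `teachers` is a
    list of teacher models. Each model is assumed to return
    (logits, [intermediate feature maps]).
    """
    s_logits, s_feats = student(x)
    kd_loss, at_loss = 0.0, 0.0
    for teacher in teachers:
        with torch.no_grad():
            t_logits, t_feats = teacher(x)
        # Soft-label distillation: match each teacher's softened predictions,
        # so no ground-truth labels are required for the clean data.
        kd_loss += F.kl_div(
            F.log_softmax(s_logits / T, dim=1),
            F.softmax(t_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Attention transfer: pull the student's spatial attention toward the
        # teachers', suppressing attention left on trigger regions.
        for s_f, t_f in zip(s_feats, t_feats):
            at_loss += (attention_map(s_f) - attention_map(t_f)).pow(2).mean()
    n = len(teachers)
    return alpha * kd_loss / n + beta * at_loss / n

In this sketch, the returned loss would be backpropagated through the student only; averaging over teachers and weighting the attention term heavily (beta >> alpha) follows common attention-transfer practice rather than a value reported in the paper.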