Affinity Backdoor Attacks in Point Clouds: A Novel Method Resilient to Corruption

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 20, pp. 5492-5504
Main Authors: Gao, Tianchong; Xia, Zeyu; Pan, Yongming
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2025.3575274

Summary: As three-dimensional (3D) point cloud technology has advanced, the security of point cloud classification models has garnered increasing attention. In a backdoor attack, an attacker poisons a model's training dataset to mislead its classification. Given the uncertainty in environmental factors and point cloud sampling equipment, point cloud data may suffer various types of corruption. While some existing classification models, e.g., PointNet and PointNet++, build corruption invariance into their designs, backdoor triggers are more vulnerable to corruption because of their small size. When corrupted, backdoor samples are more likely than clean samples to be misclassified back into their original categories, because the backdoor samples that manipulate the model lie closer to the decision boundary than clean samples do. To mitigate the detrimental effects of this feature deviation, this paper proposes a novel backdoor attack method that is robust to corruption. We introduce the concept of affinity, based on the high-level idea that an affinity category can guide the shift of sample features under corruption. We then apply an adversarial attack method to distort the decision boundary and generate backdoor samples. Experimental results demonstrate that the proposed method achieves a high attack success rate and exhibits superior robustness to corruption compared with previous backdoor attack methods.
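
The abstract's key observation is that a backdoor trigger occupies only a small fraction of a point cloud, so generic corruptions such as jitter and point dropout erase it far more readily than they perturb the object itself. The sketch below illustrates that fragility on synthetic data; it is a hypothetical NumPy illustration, not the paper's method, and the trigger placement, function names, and corruption parameters are assumptions made only for demonstration.

# Hypothetical sketch (not the authors' implementation): a tiny trigger cluster
# is appended to a point cloud, then common corruptions are applied to show how
# quickly the trigger degrades relative to the surrounding object.
import numpy as np

rng = np.random.default_rng(0)

def inject_trigger(cloud, n_trigger=16, center=(0.9, 0.9, 0.9), radius=0.02):
    """Append a small, localized cluster of points acting as a backdoor trigger."""
    trigger = np.asarray(center) + radius * rng.standard_normal((n_trigger, 3))
    return np.vstack([cloud, trigger])

def corrupt(cloud, jitter_std=0.02, drop_ratio=0.3):
    """Two common corruptions: additive Gaussian jitter and random point dropout."""
    noisy = cloud + jitter_std * rng.standard_normal(cloud.shape)
    keep = rng.random(len(noisy)) > drop_ratio
    return noisy[keep]

clean = rng.standard_normal((1024, 3))   # stand-in for a normalized object sample
poisoned = inject_trigger(clean)
corrupted = corrupt(poisoned)

# Dropout removes roughly drop_ratio of the 16 trigger points, and jitter displaces
# the survivors by an amount comparable to the trigger's radius, so the trigger
# pattern degrades much faster than the 1024-point object it was attached to.
print(f"points before/after corruption: {len(poisoned)} -> {len(corrupted)}")

The paper's contribution, by contrast, is to craft backdoor samples so that corruption-induced feature shifts are guided toward an affinity category rather than back to the sample's original class.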