Improving Transferability of Adversarial Examples with Adversaries Competition
| Published in | Proceedings (IEEE International Conference on Multimedia and Expo), pp. 1 - 6 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 15.07.2024 |
| ISSN | 1945-788X |
| DOI | 10.1109/ICME57554.2024.10688238 |
Summary: Adversarial attacks deceive deep neural networks by adding subtle perturbations to benign examples, threatening various applications. However, traditional methods transfer poorly to unknown black-box networks. To address this, we introduce a new method, Patch-based Transfer Attack (PTA), comprising Sensitive Area Localization (SAL) and Robust Perturbations Generation (RPG). SAL identifies highly activated regions, while RPG employs adversarial fine-tuning (AFT) and adversarial feature mixing (AFM) to target robust features, enhancing transferability. AFT adapts the surrogate model to the current adversarial examples, and AFM mixes the features of adversarial and clean examples, compelling the attack to focus on robust features. Comprehensive experiments demonstrate PTA's superior transferability over existing methods, especially when integrated with other techniques.
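The record only names the AFM component without detailing it; as a concrete illustration, the following is a minimal PyTorch sketch of how mixing adversarial and clean intermediate features could drive an attack loss. The surrogate network (ResNet-50), the hooked layer (`layer3`), the mixing ratio `alpha`, and the cross-entropy objective are all illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of adversarial feature mixing (AFM): blend the
# intermediate features of the adversarial and clean inputs, then score the
# blended features, so that gradient ascent on the result favors perturbing
# features the model robustly relies on. All concrete choices (ResNet-50
# surrogate, layer3 hook, alpha, cross-entropy) are assumptions for
# illustration, not the paper's exact method.
import torch
import torch.nn.functional as F
import torchvision.models as models

surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Capture an intermediate feature map via a forward hook.
features = {}
surrogate.layer3.register_forward_hook(
    lambda module, inp, out: features.update(mid=out)
)

def afm_loss(x_adv, x_clean, y, alpha=0.5):
    """Cross-entropy of class y under mixed adversarial/clean features."""
    with torch.no_grad():
        surrogate(x_clean)          # fills features["mid"] with clean features
    f_clean = features["mid"]
    surrogate(x_adv)                # overwrites features["mid"] (with grad)
    f_adv = features["mid"]
    f_mix = alpha * f_adv + (1 - alpha) * f_clean
    # Re-run the remainder of the network on the mixed feature map.
    tail = torch.nn.Sequential(surrogate.layer4, surrogate.avgpool,
                               torch.nn.Flatten(), surrogate.fc)
    return F.cross_entropy(tail(f_mix), y)
```

An untargeted attack would then take gradient-ascent steps on this loss with respect to `x_adv`, plausibly restricted to the sensitive patch region that SAL identifies, e.g. via a sign-gradient update inside an L-infinity ball.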