Improving Transferability of Adversarial Examples with Adversaries Competition

Bibliographic Details
Published in: Proceedings (IEEE International Conference on Multimedia and Expo), pp. 1-6
Main Authors: Zhao, Shuai; Li, Tuo; Zhang, Boyuan; Zhai, Yang; Liu, Ziyi; Han, Yahong
Format: Conference Proceeding
Language: English
Published: IEEE, 15.07.2024
ISSN: 1945-788X
DOI: 10.1109/ICME57554.2024.10688238

Summary: Adversarial attacks deceive deep neural networks by adding subtle perturbations to benign examples, threatening a range of applications. However, traditional methods transfer poorly to unknown black-box networks. To address this, we introduce a new method, Patch-based Transfer Attack (PTA), comprising Sensitive Area Localization (SAL) and Robust Perturbations Generation (RPG). SAL identifies highly activated regions, while RPG employs adversarial fine-tuning (AFT) and adversarial feature mixing (AFM) to target robust features and thereby enhance transferability. AFT adapts the surrogate model to the current adversarial examples, and AFM blends adversarial and clean example features, compelling the model to focus on robust features. Comprehensive experiments demonstrate PTA's superior transferability, outperforming existing methods, especially when integrated with other techniques.
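
As an illustration of the adversarial feature mixing (AFM) step described in the summary, the sketch below shows one plausible reading in PyTorch: intermediate features of a clean example and its adversarial counterpart are blended before the remaining layers, so the loss is computed on features that survive the perturbation. The split point in the network, the convex mixing rule, and the coefficient alpha are assumptions made for illustration, not the paper's exact formulation.

import torch
import torch.nn as nn

# Hypothetical sketch of adversarial feature mixing (AFM): features of a
# clean example and its adversarial counterpart are blended at an assumed
# split point before the remaining layers. The convex combination and the
# coefficient alpha are illustrative choices, not the paper's definition.
class FeatureMixSurrogate(nn.Module):
    def __init__(self, backbone: nn.Module, head: nn.Module, alpha: float = 0.5):
        super().__init__()
        self.backbone = backbone  # layers up to the assumed mixing point
        self.head = head          # layers after the mixing point
        self.alpha = alpha        # assumed mixing coefficient

    def forward(self, x_clean: torch.Tensor, x_adv: torch.Tensor) -> torch.Tensor:
        f_clean = self.backbone(x_clean)  # clean-example features
        f_adv = self.backbone(x_adv)      # adversarial-example features
        f_mixed = self.alpha * f_adv + (1.0 - self.alpha) * f_clean
        return self.head(f_mixed)         # logits from mixed features

# Toy usage: a two-stage MLP stands in for a real surrogate network.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
head = nn.Linear(256, 10)
model = FeatureMixSurrogate(backbone, head, alpha=0.5)

x_clean = torch.randn(4, 3, 32, 32)
# Placeholder perturbation standing in for an actual attack step.
x_adv = (x_clean + 0.03 * torch.sign(torch.randn_like(x_clean))).requires_grad_(True)

logits = model(x_clean, x_adv)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (4,)))
loss.backward()
# x_adv.grad now holds the gradient an iterative attack could ascend to
# update the adversarial example under the mixed-feature loss.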