Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 21.11.2022 |
Subjects | |
Online Access | Get full text |
Summary: | Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are
crafted by adding human-imperceptible perturbations to benign inputs. At the
same time, adversarial examples transfer across models, enabling practical
black-box attacks. However, existing methods still fall short of the desired
transfer attack performance. In this work, focusing on gradient optimization
and consistency, we analyse the gradient elimination phenomenon as well as the
local momentum optimum dilemma. To tackle these challenges, we introduce
Global Momentum Initialization (GI), which provides global momentum knowledge
to mitigate gradient elimination. Specifically, we perform gradient
pre-convergence before the attack, with a global search during this stage. GI
integrates seamlessly with existing transfer methods, improving the success
rate of transfer attacks by an average of 6.4% under various advanced defense
mechanisms compared to the state-of-the-art method. Ultimately, GI
demonstrates strong transferability in both the image and video attack
domains. In particular, when attacking advanced defense methods in the image
domain, it achieves an average attack success rate of 95.4%. The code is
available at
https://github.com/Omenzychen/Global-Momentum-Initialization. |
DOI: | 10.48550/arxiv.2211.11236 |
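The summary describes GI only at a high level: accumulate momentum in a pre-convergence stage that uses an enlarged search step, then hand that momentum to a standard iterative attack instead of starting from zero. The sketch below illustrates the idea on top of MI-FGSM, assuming a PyTorch classifier with image inputs in [0, 1]; the function name `gi_mi_fgsm` and the hyperparameters `pre_steps` and `search_factor` are illustrative assumptions rather than values from the paper (see the GitHub link above for the authors' implementation).

```python
import torch

def gi_mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0,
               pre_steps=5, search_factor=10.0):
    """MI-FGSM with a global momentum initialization stage (sketch).

    Stage 1 pre-converges the momentum with an enlarged step (global
    search); stage 2 is plain MI-FGSM, but seeded with that momentum.
    `pre_steps` and `search_factor` are assumed hyperparameters.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / steps               # per-iteration step size
    g = torch.zeros_like(x)           # momentum accumulator

    def step(x_cur, g, step_size):
        # One momentum-accumulating ascent step on the loss.
        x_cur = x_cur.detach().requires_grad_(True)
        loss = loss_fn(model(x_cur), y)
        grad, = torch.autograd.grad(loss, x_cur)
        # Per-sample L1 normalization, assuming an NCHW image batch.
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        g = mu * g + grad
        x_next = x_cur.detach() + step_size * g.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_next = x + (x_next - x).clamp(-eps, eps)
        return x_next.clamp(0, 1), g

    # Stage 1: gradient pre-convergence with a global search step.
    x_pre = x.clone()
    for _ in range(pre_steps):
        x_pre, g = step(x_pre, g, search_factor * alpha)

    # Stage 2: standard MI-FGSM, starting from the pre-converged momentum.
    x_adv = x.clone()
    for _ in range(steps):
        x_adv, g = step(x_adv, g, alpha)
    return x_adv.detach()
```

Because the attack begins with momentum already pointing toward a globally searched ascent direction, the early iterations are not spent recovering from the gradient elimination the summary mentions; everything else is unchanged MI-FGSM, which is why the initialization composes with other transfer-based attacks.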