Interpreting Attributions and Interactions of Adversarial Attacks
| Field | Value |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | 16.08.2021 |
Summary: This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task. We estimate the attributions of different image regions to the decrease of the attacking cost based on the Shapley value. We define and quantify interactions among adversarial perturbation pixels, and decompose the entire perturbation map into relatively independent perturbation components. The decomposition of the perturbation map shows that adversarially-trained DNNs have more perturbation components in the foreground than normally-trained DNNs. Moreover, compared to the normally-trained DNN, the adversarially-trained DNN has more components that mainly decrease the score of the true category. The above analyses provide new insights into the understanding of adversarial attacks.
DOI: 10.48550/arxiv.2108.06895
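The Shapley-value attribution described in the summary can be illustrated with a generic Monte Carlo permutation estimator. This is a minimal sketch, not the paper's implementation: `attack_cost` is a toy stand-in for the real attacking cost (which would require a DNN and an adversarial perturbation), and the integer "regions" stand in for image regions.

```python
# Hedged sketch: Monte Carlo estimate of each region's Shapley attribution
# to the *decrease* of the attacking cost. In the paper, the players would
# be perturbed image regions and the cost a DNN attack objective; here both
# are replaced by illustrative stand-ins.
import random

def attack_cost(active_regions):
    # Toy stand-in: attacking cost after applying only the perturbations
    # in `active_regions` (lower cost = stronger attack). Each region
    # contributes an additive gain, so Shapley values are exact here.
    base = 10.0
    gain = sum(0.5 + 0.1 * r for r in active_regions)
    return base - gain

def shapley_attributions(regions, cost_fn, n_samples=200, seed=0):
    """Estimate each region's Shapley attribution to the cost decrease.

    Samples random orderings of the regions; a region's attribution is
    its average marginal decrease of the cost when added to the
    coalition of regions preceding it in the ordering.
    """
    rng = random.Random(seed)
    phi = {r: 0.0 for r in regions}
    for _ in range(n_samples):
        order = list(regions)
        rng.shuffle(order)
        coalition = []
        prev = cost_fn(coalition)
        for r in order:
            coalition.append(r)
            cur = cost_fn(coalition)
            phi[r] += prev - cur  # marginal decrease contributed by r
            prev = cur
    return {r: v / n_samples for r, v in phi.items()}

attrib = shapley_attributions(list(range(4)), attack_cost)
```

By the efficiency property of the Shapley value, the attributions sum to the total cost decrease `attack_cost([]) - attack_cost(all_regions)`; for this additive toy cost each estimate is exact regardless of the sampled orderings.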