Interpreting Attributions and Interactions of Adversarial Attacks

Bibliographic Details
Main Authors: Wang, Xin; Lin, Shuyun; Zhang, Hao; Zhu, Yufei; Zhang, Quanshi
Format: Journal Article
Language: English
Published: 16.08.2021
Summary: This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task. We estimate attributions of different image regions to the decrease of the attacking cost based on the Shapley value. We define and quantify interactions among adversarial perturbation pixels, and decompose the entire perturbation map into relatively independent perturbation components. The decomposition of the perturbation map shows that adversarially-trained DNNs have more perturbation components in the foreground than normally-trained DNNs. Moreover, compared to the normally-trained DNN, the adversarially-trained DNN has more components that mainly decrease the score of the true category. The above analyses provide new insights into the understanding of adversarial attacks.
DOI: 10.48550/arxiv.2108.06895
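
As a rough illustration of the two quantities the summary describes, the sketch below estimates (1) Shapley attributions of grid regions of the perturbation map to the change in attack cost, via permutation sampling, and (2) a pairwise interaction between two regions. This is a minimal sketch assuming a PyTorch image classifier; the grid partition, sampling budgets, and all names here (attack_cost, shapley_attributions, pairwise_interaction) are illustrative assumptions, not the authors' released code.

```python
import numpy as np
import torch
import torch.nn.functional as F

def attack_cost(model, x, label):
    """Classification loss on the true label; an attack tries to raise it."""
    with torch.no_grad():
        logits = model(x.unsqueeze(0))
    return F.cross_entropy(logits, torch.tensor([label])).item()

def region_masks(h, w, grid):
    """Binary masks splitting an H x W image into a grid x grid partition."""
    masks = torch.zeros(grid * grid, 1, h, w)
    rh, rw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            masks[i * grid + j, :, i * rh:(i + 1) * rh, j * rw:(j + 1) * rw] = 1.0
    return masks

def shapley_attributions(model, x, delta, label, grid=4, samples=200, seed=0):
    """Permutation-sampling estimate of each region's Shapley value for the
    attack-cost change caused by the perturbation map `delta` (C x H x W)."""
    rng = np.random.default_rng(seed)
    _, h, w = x.shape
    masks = region_masks(h, w, grid)
    n = masks.shape[0]
    phi = np.zeros(n)
    for _ in range(samples):
        order = rng.permutation(n)
        mask = torch.zeros(1, h, w)
        prev = attack_cost(model, x + delta * mask, label)
        for r in order:
            mask = mask + masks[r]           # reveal region r's perturbation
            cur = attack_cost(model, x + delta * mask, label)
            phi[r] += cur - prev             # marginal contribution of r
            prev = cur
    return phi / samples

def pairwise_interaction(model, x, delta, label, a, b, grid=4, samples=200, seed=0):
    """Interaction between regions a and b: the expected synergy
    v(S+{a,b}) - v(S+{a}) - v(S+{b}) + v(S) over random coalitions S.
    Uniform coalition sampling is a Banzhaf-style simplification of the
    Shapley interaction index; the paper's exact definition may differ."""
    rng = np.random.default_rng(seed)
    _, h, w = x.shape
    masks = region_masks(h, w, grid)
    others = [k for k in range(masks.shape[0]) if k not in (a, b)]
    total = 0.0
    for _ in range(samples):
        keep = [k for k in others if rng.random() < 0.5]   # random coalition S
        s = masks[keep].sum(dim=0) if keep else torch.zeros(1, h, w)
        v = lambda m: attack_cost(model, x + delta * m, label)
        total += v(s + masks[a] + masks[b]) - v(s + masks[a]) - v(s + masks[b]) + v(s)
    return total / samples
```

Exact Shapley values require evaluating all 2^n coalitions, so both estimators rely on sampling; with a 4x4 grid and a few hundred samples, each coalition costs a single forward pass, which keeps the computation tractable.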