A Fast Two-Stage Black-Box Deep Learning Network Attacking Method Based on Cross-Correlation

Bibliographic Details
Published in: Computers, Materials & Continua, Vol. 64, No. 1, pp. 623-635
Main Authors: Li, Deyin; Cheng, Mingzhi; Yang, Yu; Lei, Min; Shen, Linfeng
Format: Journal Article
Language: English
Published: Henderson: Tech Science Press, 2020

Summary: Deep learning networks are widely used in various systems that require classification. However, deep learning networks are vulnerable to adversarial attacks. The study of adversarial attacks plays an important role in defense. Black-box attacks require less knowledge about target models than white-box attacks do, which means black-box attacks are easier to launch and more valuable. However, state-of-the-art black-box attacks still suffer from low success rates and large visual distances between the generated adversarial images and the original images. This paper proposes a fast black-box attack based on cross-correlation (FBACC). The attack is carried out in two stages. In the first stage, an adversarial image that will be misclassified as the target label is generated by gradient-descent learning. At this point the image may look very different from the original one. Then, in the second stage, the visual quality is gradually improved under the condition that the image remains misclassified. By using the cross-correlation method, the error in smooth regions is ignored and the number of iterations is reduced. Compared with existing black-box adversarial attack methods, FBACC achieves a higher fooling rate with fewer iterations. When attacking LeNet5 and AlexNet individually, the fooling rates are 100% and 89.56%, respectively; when attacking both at the same time, the fooling rate is 69.78%. The FBACC method also provides a new adversarial attack for the study of defenses against adversarial attacks.
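The two-stage procedure in the abstract can be sketched in miniature. This is an illustrative sketch only, not the paper's implementation: the linear "black-box" classifier, the sampled finite-difference gradient estimate, and the thresholded difference mask standing in for the cross-correlation test on smooth regions are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box classifier: a fixed linear model over
# 8x8 "images" with 10 classes (hypothetical; the paper attacks
# LeNet5/AlexNet through queries only).
W = rng.normal(size=(10, 64))
predict = lambda x: int(np.argmax(W @ x.ravel()))

def stage1_attack(x, target, steps=200, eps=0.05):
    """Stage 1: push the image toward the target label with
    sign-gradient steps; the gradient of the target-class score is
    estimated by finite-difference queries on a few sampled pixels."""
    adv = x.copy()
    for _ in range(steps):
        if predict(adv) == target:
            break
        g = np.zeros_like(adv)
        for i in rng.choice(adv.size, 16, replace=False):
            d = np.zeros(adv.size)
            d[i] = 1e-3
            g.flat[i] = ((W @ (adv.ravel() + d))[target]
                         - (W @ (adv.ravel() - d))[target]) / 2e-3
        adv += eps * np.sign(g)   # may overshoot visually; fixed in stage 2
    return adv

def stage2_refine(adv, orig, target, steps=100, step=0.2):
    """Stage 2: pull the adversarial image back toward the original,
    but only where it still differs noticeably (a crude stand-in for
    the cross-correlation check that skips smooth regions), accepting
    a step only if the target misclassification is preserved."""
    for _ in range(steps):
        diff = orig - adv
        mask = np.abs(diff) > np.abs(diff).mean()  # ignore smooth regions
        cand = adv + step * diff * mask
        if predict(cand) == target:                # label must stay fooled
            adv = cand
        else:
            step *= 0.5                            # back off and retry
    return adv

orig = rng.normal(size=(8, 8))
target = (predict(orig) + 1) % 10      # pick some wrong label as the target
adv = stage1_attack(orig, target)
adv = stage2_refine(adv, orig, target)
```

Skipping the masked (smooth) pixels in stage 2 is what reduces the iteration count: only the regions where the adversarial image visibly deviates from the original are queried and refined.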
ISSN: 1546-2218, 1546-2226
DOI:10.32604/cmc.2020.09800