Generating Black-Box Adversarial Examples in Sparse Domain

Bibliographic Details
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, Vol. 6, no. 4, pp. 795-804
Main Authors: Zanddizari, Hadi; Zeinali, Behnam; Chang, J. Morris
Format: Journal Article
Language: English
Published: Piscataway: IEEE, 01.08.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

Summary: Applications of machine learning (ML) models and convolutional neural networks (CNNs) have increased rapidly. Although state-of-the-art CNNs provide high accuracy in many applications, recent investigations show that such networks are highly vulnerable to adversarial attacks. In a black-box adversarial attack, the attacker has no knowledge of the model or the training dataset, but does have access to some input data and their labels. In this paper, we propose a novel approach to generating a black-box attack in the sparse domain, where the most important information of an image can be observed. Our investigation shows that large sparse (LaS) components play a critical role in the performance of image classifiers. Under this presumption, to generate an adversarial example we transfer an image into a sparse domain and apply a threshold to choose only the k LaS components. In contrast to very recent works that randomly perturb k low-frequency (LoF) components, we perturb the k LaS components either randomly (query-based) or in the direction of the most correlated sparse signal from a different class. We show that LaS components carry some middle- and higher-frequency information, which fools image classifiers with fewer queries. We demonstrate the effectiveness of this approach by fooling six state-of-the-art image classifiers, the TensorFlow Lite (TFLite) model of the Google Cloud Vision platform, and the YOLOv5 object detection model. Mean squared error (MSE) and peak signal-to-noise ratio (PSNR) are used as quality metrics. We also present a theoretical proof connecting these metrics to the level of perturbation in the sparse domain.
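The core idea in the summary (transform an image to a sparse domain, keep the k largest-magnitude components, perturb them, and measure quality via MSE/PSNR) can be sketched as follows. This is a minimal illustration, not the paper's exact method: the choice of the DCT as the sparsifying transform, the perturbation bound epsilon, and the function names are all assumptions for the sake of the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def perturb_las(image, k, epsilon, rng=None):
    """Illustrative sketch of a LaS-style perturbation.
    Assumption: the 2-D DCT serves as the sparse domain; the paper's
    actual transform and thresholding details may differ."""
    rng = np.random.default_rng() if rng is None else rng
    coeffs = dctn(image, norm="ortho")            # transfer image to a sparse domain
    flat_mag = np.abs(coeffs).ravel()
    idx = np.argpartition(flat_mag, -k)[-k:]      # indices of the k LaS components
    noise = np.zeros(coeffs.size)
    noise[idx] = rng.uniform(-epsilon, epsilon, size=k)  # random (query-based) variant
    adv_coeffs = coeffs + noise.reshape(coeffs.shape)
    return idctn(adv_coeffs, norm="ortho")        # back to the pixel domain

def mse_psnr(x, x_adv, peak=1.0):
    """The two quality metrics mentioned in the summary."""
    mse = np.mean((x - x_adv) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```

The directed variant described in the summary would replace the uniform noise with a step toward the sparse representation of the most correlated image from a different class; the random variant above corresponds to the query-based setting.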
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ISSN: 2471-285X
DOI: 10.1109/TETCI.2021.3122467