Crafting an Adversarial Example in the DNN Representation Space by Minimizing the Distance from the Decision Boundary

Bibliographic Details
Published in: 2021 IEEE Data Science and Learning Workshop (DSLW), pp. 1-8
Main Authors: Li, Li; Doroslovacki, Milos; Loew, Murray H.
Format: Conference Proceeding
Language: English
Published: IEEE, 05.06.2021

Summary: Although deep neural networks (DNNs) achieve state-of-the-art performance in a wide range of machine learning (ML) applications, they are vulnerable: when small intentional perturbations are added to inputs, the network misclassifies them with high confidence. This phenomenon attracts broad attention because it is a security issue. In this paper, we study the geometric properties of the decision boundaries in the representation space of DNNs and propose novel adversarial approaches that move the representations of the inputs toward the decision boundaries and thus change the predictions of the DNN. Our experimental results show that the proposed algorithms are on par with or better than state-of-the-art adversarial approaches in terms of the magnitude of perturbation and computation time.
DOI: 10.1109/DSLW51110.2021.9523406
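
The summary above describes perturbing an input so that its representation crosses the nearest decision boundary. The sketch below illustrates that general idea only; it is not the authors' algorithm. The toy model, the `boundary_attack` function, the step size, and the iteration budget are all illustrative assumptions: the perturbation follows the gradient that shrinks the margin between the top two logits until the predicted class flips.

```python
# Minimal sketch (assumed, not the paper's method): move an input toward the
# decision boundary by descending the logit margin until the prediction flips.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical small classifier standing in for a trained DNN.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

def boundary_attack(x, model, step=0.05, max_iters=200):
    """Perturb x toward the nearest decision boundary by reducing the
    margin between the top-two logits (a proxy for boundary distance)."""
    x_adv = x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        orig_class = model(x).argmax(dim=1)
    for _ in range(max_iters):
        logits = model(x_adv)
        if logits.argmax(dim=1) != orig_class:        # crossed the boundary
            break
        top2 = logits.topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]              # shrinks to 0 at the boundary
        grad = torch.autograd.grad(margin.sum(), x_adv)[0]
        with torch.no_grad():
            x_adv -= step * grad / (grad.norm() + 1e-12)  # small normalized step
        x_adv.requires_grad_(True)
    return x_adv.detach()

x = torch.randn(1, 2)
x_adv = boundary_attack(x, model)
print("perturbation L2 norm:", (x_adv - x).norm().item())
print("original class:", model(x).argmax(1).item(),
      "adversarial class:", model(x_adv).argmax(1).item())
```

Because the step is taken along the direction that decreases the margin fastest, the accumulated perturbation stays small when the input already lies close to a decision boundary, which is the quantity the paper's approaches aim to minimize.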