Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks

Bibliographic Details
Published in: 2022 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1028-1035
Main Authors: Soares, Eduardo; Angelov, Plamen; Suri, Neeraj
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2022
DOI: 10.1109/SSCI51031.2022.10022016

Summary: Deep neural networks (DNNs) have become essential for solving diverse complex problems and have achieved considerable success in tackling computer vision tasks. However, DNNs are vulnerable to human-imperceptible adversarial distortion/noise patterns that can detrimentally impact safety-critical applications such as autonomous driving. In this paper, we introduce a novel robust-by-design deep learning approach, Sim-DNN, that detects adversarial attacks through an inner defense mechanism which considers the degree of similarity between new data samples and autonomously chosen prototypes. The approach exploits the abrupt drop of the similarity score to detect concept changes caused by distorted/noisy data when comparing their similarities against the set of prototypes. Due to the feed-forward prototype-based architecture of Sim-DNN, no re-training or adversarial training is required. To evaluate the robustness of the proposed method, we considered the recently introduced ImageNet-R dataset and different adversarial attack methods such as FGSM, PGD, and DDN. Different DNN architectures were also considered in the analysis. Results show that the proposed Sim-DNN detects adversarial attacks with better performance than its mainstream competitors. Moreover, as Sim-DNN requires no adversarial training, its performance on clean and robust images is more stable than that of its competitors, which require an external defense mechanism to improve their robustness.
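
The detection mechanism described in the summary can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the choice of cosine similarity, the fixed threshold, and the helper names (cosine_similarity, detect_adversarial) are assumptions made for illustration only. It flags an input whose best prototype-similarity score drops below a cut-off that would, in practice, be calibrated on clean data.

# Minimal sketch (assumed, not the authors' code): flag a sample as
# adversarial when its best similarity to any prototype drops abruptly
# below a calibrated threshold.
import numpy as np

def cosine_similarity(x, p):
    # Cosine similarity between a feature vector and one prototype.
    return float(np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12))

def detect_adversarial(features, prototypes, threshold=0.7):
    # features   : 1-D feature vector of the incoming sample
    #              (e.g. from a frozen feature extractor).
    # prototypes : list of 1-D vectors, the autonomously chosen prototypes.
    # threshold  : hypothetical cut-off; would be calibrated on clean
    #              validation data in practice.
    scores = [cosine_similarity(features, p) for p in prototypes]
    best = max(scores)
    # An abrupt drop of the similarity score signals a concept change,
    # i.e. a likely distorted/adversarial input.
    return best < threshold, best

# Usage: a sample near a prototype passes, a heavily perturbed one is flagged.
rng = np.random.default_rng(0)
prototypes = [rng.normal(size=128) for _ in range(10)]
clean = prototypes[3] + 0.05 * rng.normal(size=128)
noisy = rng.normal(size=128)
print(detect_adversarial(clean, prototypes))   # (False, score near 1.0)
print(detect_adversarial(noisy, prototypes))   # (True, low score)

Because the detector only reads similarity scores from a feed-forward prototype-based pass, this style of check needs no re-training or adversarial training, which matches the stability claim in the summary.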