A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness

Bibliographic Details
Published in: Kompʹûternaâ optika, Vol. 49, no. 2, pp. 222-252
Main Authors: Trusov, A.V., Limonova, E.E., Arlazarov, V.V.
Format: Journal Article
Language: English
Published: Samara National Research University, 01.04.2025

Summary: Adversarial examples, in the context of computer vision, are inputs deliberately crafted to deceive or mislead artificial neural networks. These examples exploit vulnerabilities in neural networks, resulting in minimal alterations to the original input that are imperceptible to humans but can significantly impact the network's output. In this paper, we present a thorough survey of research on adversarial examples, with a primary focus on their impact on neural network classifiers. We closely examine the theoretical capabilities and limitations of artificial neural networks. After that, we explore the discovery and evolution of adversarial examples, starting from basic gradient-based techniques and progressing toward the recent trend of employing generative neural networks for this purpose. We discuss the limited effectiveness of existing countermeasures against adversarial examples. Furthermore, we emphasize that adversarial examples originate from the misalignment between human and neural network decision-making processes, which can be attributed to the current methodology for training neural networks. We also argue that the commonly used term "attack on neural networks" is misleading when discussing adversarial deep learning. Through this paper, our objective is to provide a comprehensive overview of adversarial examples and inspire researchers to develop more robust neural networks. Such networks will align better with human decision-making processes and enhance the security and reliability of computer vision systems in practical applications.
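To make the "basic gradient-based techniques" mentioned in the summary concrete, the sketch below illustrates one such method, the Fast Gradient Sign Method (FGSM): the input is perturbed by a small step in the direction of the sign of the loss gradient. This is not reproduced from the record itself; the function name, the PyTorch framework, and the epsilon value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Minimal FGSM sketch (illustrative, not the paper's code).

    model:   a differentiable classifier returning logits
    x:       input image tensor with values in [0, 1]
    y:       ground-truth label tensor
    epsilon: L-infinity bound on the per-pixel perturbation
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon, the resulting image is typically visually indistinguishable from the original while changing the classifier's prediction, which is the vulnerability the surveyed literature studies.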
ISSN: 0134-2452
ISSN: 2412-6179
DOI: 10.18287/2412-6179-CO-1494