Adv-Plate Attack: Adversarially Perturbed Plate for License Plate Recognition System
Published in | Journal of Sensors, Vol. 2021, no. 1 |
---|---|
Main Authors | , |
Format | Journal Article |
Language | English |
Published | New York: Hindawi, 01.11.2021 (Hindawi Limited) |
Summary: | Deep learning technology has been used to develop improved license plate recognition (LPR) systems. In particular, deep neural networks have brought significant improvements to LPR systems. However, deep neural networks are vulnerable to adversarial examples. Existing adversarial-example studies on LPR systems either perturb specific spots that are easily identified by humans or require human feedback. In this paper, we propose a method for generating adversarial examples on the license plate that requires no human feedback and is difficult for humans to identify. In the proposed method, adversarial noise is added only to the license plate region of the image, producing an adversarial example that is misrecognized by the LPR system without being noticed by humans. Experiments were performed on the baza silka dataset, with TensorFlow as the machine learning library. When epsilon for the first type is 0.6, and alpha and the number of iterations for the second type are 0.4 and 1000, respectively, the adversarial examples generated by the first- and second-type generation methods reduce the LPR system's accuracy to 20% and 15%. |
---|---|
ISSN: | 1687-725X 1687-7268 |
DOI: | 10.1155/2021/6473833 |
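
The abstract describes two attack types: a one-step perturbation controlled by epsilon and an iterative perturbation controlled by alpha and an iteration count, with noise confined to the license plate region. The sketch below is not the authors' released code; it only illustrates how such a masked, FGSM-style perturbation could be expressed in TensorFlow. The `model`, `image`, `label`, and `plate_mask` arguments are hypothetical placeholders, and treating alpha as a per-step size is an assumption.

```python
# Minimal sketch of plate-masked adversarial noise (assumptions: a
# classification-style model returning logits, an image in [0, 1], and a
# binary plate_mask that broadcasts against the image; none of these names
# come from the paper's code).
import tensorflow as tf

def masked_fgsm(model, image, label, plate_mask, epsilon=0.6):
    """One-step (first-type) attack: sign-gradient noise added only where
    plate_mask == 1, leaving the rest of the image untouched."""
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    label = tf.convert_to_tensor([label])
    with tf.GradientTape() as tape:
        tape.watch(image)
        logits = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            label, logits, from_logits=True)
    grad = tape.gradient(loss, image)
    noise = epsilon * tf.sign(grad) * plate_mask  # confine noise to the plate
    return tf.clip_by_value(image + noise, 0.0, 1.0)[0]

def masked_iterative(model, image, label, plate_mask, alpha=0.4, iters=1000):
    """Iterative (second-type) attack: repeated small masked steps,
    clipped back to the valid pixel range after each step."""
    adv = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    label = tf.convert_to_tensor([label])
    for _ in range(iters):
        with tf.GradientTape() as tape:
            tape.watch(adv)
            loss = tf.keras.losses.sparse_categorical_crossentropy(
                label, model(adv), from_logits=True)
        grad = tape.gradient(loss, adv)
        adv = tf.clip_by_value(adv + alpha * tf.sign(grad) * plate_mask, 0.0, 1.0)
    return adv[0]
```

Multiplying the sign gradient by the plate mask is what keeps the perturbation invisible outside the plate; everything else follows the standard fast-gradient and iterative attack recipes.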