Adv-Plate Attack: Adversarially Perturbed Plate for License Plate Recognition System

Bibliographic Details
Published in: Journal of Sensors, Vol. 2021, No. 1
Main Authors: Kwon, Hyun; Baek, Jang-Woon
Format: Journal Article
Language: English
Published: New York: Hindawi Limited, 01.11.2021
Summary: Deep learning technology has been used to develop improved license plate recognition (LPR) systems; in particular, deep neural networks have brought significant improvements to LPR. However, deep neural networks are vulnerable to adversarial examples. Existing studies of adversarial examples for LPR systems either target specific spots that are easily identifiable by humans or require human feedback. In this paper, we propose a method of generating adversarial examples on the license plate that requires no human feedback and is difficult for humans to identify. In the proposed method, adversarial noise is added only to the license plate region of the image, creating an adversarial example that is misrecognized by the LPR system without being noticed by humans. Experiments were performed using the baza silka dataset, with TensorFlow as the machine learning library. When epsilon for the first type is 0.6, and alpha and the number of iterations for the second type are 0.4 and 1000, respectively, the adversarial examples generated by the first- and second-type generation methods reduce the LPR system's accuracy to 20% and 15%, respectively.
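
The abstract describes adding gradient-sign noise only within the license plate region, with a single-step variant controlled by epsilon and an iterative variant controlled by alpha and an iteration count. As a rough illustration only, the TensorFlow sketch below shows one way such region-restricted perturbations could be generated; the model, plate_mask, and function names are hypothetical assumptions and are not taken from the paper.

    import tensorflow as tf

    # Hypothetical sketch: FGSM-style noise restricted to the plate region.
    # image:      [1, H, W, C] float tensor in [0, 1]
    # plate_mask: [1, H, W, 1] binary tensor, 1 inside the plate, 0 elsewhere
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    def masked_single_step(model, image, label, plate_mask, epsilon=0.6):
        """Single-step ('first type') attack: one epsilon-scaled masked step."""
        with tf.GradientTape() as tape:
            tape.watch(image)
            loss = loss_fn(label, model(image))
        grad = tape.gradient(loss, image)
        # Zero the noise everywhere outside the license plate.
        noise = epsilon * tf.sign(grad) * plate_mask
        return tf.clip_by_value(image + noise, 0.0, 1.0)

    def masked_iterative(model, image, label, plate_mask, alpha=0.4, iterations=1000):
        """Iterative ('second type') attack: repeated alpha-scaled masked steps."""
        adv = tf.identity(image)
        for _ in range(iterations):
            with tf.GradientTape() as tape:
                tape.watch(adv)
                loss = loss_fn(label, model(adv))
            grad = tape.gradient(loss, adv)
            adv = tf.clip_by_value(adv + alpha * tf.sign(grad) * plate_mask, 0.0, 1.0)
        return adv

Here plate_mask is assumed to be a binary map covering only the detected plate, so the perturbation leaves the rest of the image untouched; the default epsilon, alpha, and iteration values mirror the parameters reported in the abstract.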
ISSN: 1687-725X, 1687-7268
DOI: 10.1155/2021/6473833