Deep learning adversarial attacks and defenses on license plate recognition system

Bibliographic Details
Published in: Cluster Computing, Vol. 27, no. 8, pp. 11627-11644
Main Authors: Vizcarra, Conrado; Alhamed, Shadan; Algosaibi, Abdulelah; Alnaeem, Mohammed; Aldalbahi, Adel; Aljaafari, Nura; Sawalmeh, Ahmad; Nazzal, Mahmoud; Khreishah, Abdallah; Alhumam, Abdulaziz; Anan, Muhammad
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.11.2024
Summary: The breakthroughs in machine learning and deep neural networks have revolutionized the handling of critical practical challenges, achieving state-of-the-art performance in various computer vision tasks. Notably, the application of deep neural networks in optical character recognition (OCR) has significantly enhanced the performance of OCR systems, making them a pivotal preprocessing component in text-analysis pipelines for crucial applications such as license plate recognition (LPR) systems, where the efficiency of OCR is paramount. However, despite these advancements, the integration of deep neural networks in OCR introduces inherent security vulnerabilities, particularly susceptibility to adversarial examples. Adversarial examples in LPR systems are crafted by introducing perturbations to original license plate images, which can effectively compromise the integrity of the license plate recognition process, leading to erroneous license plate number identification. Given that the primary goal of OCR in this context is to accurately recognize license plate numbers, even a single misinterpreted character can significantly degrade the overall performance of the LPR system. The vulnerability of LPR systems to adversarial attacks underscores the urgent need to address the security weaknesses inherited from deep neural networks. In response to these challenges, the exploration of alternative defense mechanisms, such as image denoising and inpainting, presents a compelling approach to bolstering the resilience of LPR systems against adversarial attacks. Prioritizing the practical implementation and integration of image denoising and inpainting techniques aligns with the operational requirements of real-world LPR systems: these methods can be seamlessly integrated into existing pipelines, offering a pragmatic and accessible means of enhancing security without imposing significant computational overhead.
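The abstract does not name the attack algorithm used against the LPR system. As an illustrative sketch only, adversarial perturbations of the kind described are commonly generated with the Fast Gradient Sign Method (FGSM); the toy linear character classifier, class count, and epsilon below are assumptions for demonstration, not the paper's setup:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.05):
    """FGSM step: add an eps-bounded perturbation in the direction of the
    loss gradient's sign, then clip back to the valid pixel range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy stand-in for an OCR character classifier: a linear scorer W @ x.
# For a one-step targeted push, the input-gradient direction that raises
# the target class score over the true class is W[target] - W[true].
rng = np.random.default_rng(0)
x = rng.random(64)                      # flattened 8x8 character crop in [0, 1]
W = rng.standard_normal((10, 64))       # 10 hypothetical character classes
true_cls, target_cls = 3, 7
grad = W[target_cls] - W[true_cls]      # direction favoring the wrong character

x_adv = fgsm_perturb(x, grad, eps=0.05)
print(np.max(np.abs(x_adv - x)) <= 0.05 + 1e-9)  # perturbation stays bounded
```

Because `x` already lies in `[0, 1]`, the clipped perturbation remains within the epsilon budget, which is what keeps such attacks visually inconspicuous on a license plate image.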
By embracing a multi-faceted approach that draws on the strengths of traditional image-processing techniques, the research endeavors to develop comprehensive and versatile defense strategies tailored to the specific vulnerabilities and requirements of LPR systems. This holistic approach aims to fortify LPR systems against adversarial threats, fostering greater trust and reliability in the deployment of OCR and LPR technologies across domains and applications.
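The denoising defense described above can be sketched as a simple preprocessing filter applied before OCR. The median filter, filter size, toy image, and sparse perturbation below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def median_denoise(img, k=3):
    """k x k median filter: a cheap preprocessing step that can suppress
    small, localized adversarial perturbations before OCR runs."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

clean = np.full((8, 8), 0.5)                    # toy uniform plate patch
attacked = clean.copy()
for r, c in [(1, 1), (3, 5), (6, 2), (2, 6), (5, 5), (7, 0)]:
    attacked[r, c] = 0.9                        # sparse pixel perturbations

denoised = median_denoise(attacked)
err_before = np.abs(attacked - clean).mean()
err_after = np.abs(denoised - clean).mean()
print(err_after < err_before)  # filtering moves the image back toward clean
```

Because the filter runs once per image with no model retraining, it matches the abstract's point about adding security without significant computational overhead; a production system would likely use an optimized implementation rather than this explicit double loop.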
ISSN: 1386-7857, 1573-7543
DOI: 10.1007/s10586-024-04513-4