Simulated Adversarial Attacks on Traffic Sign Recognition of Autonomous Vehicles

Bibliographic Details
Published in: Engineering Proceedings, Vol. 92, No. 1, p. 15
Main Authors: Chu-Hsing Lin, Chao-Ting Yu, Yan-Ling Chen, Yo-Yu Lin, Hsin-Ta Chiao
Format: Journal Article
Language: English
Published: MDPI AG, 01.04.2025

Summary: With the development and application of artificial intelligence (AI) technology, autonomous driving systems are gradually being deployed on the road. However, concerns remain about the safety and reliability of unmanned vehicles. The autonomous driving systems in today's unmanned vehicles must also withstand information security attacks; if they cannot defend against such attacks, traffic accidents may result, exposing passengers to risk. Therefore, in this study we investigated adversarial attacks on the traffic sign recognition of autonomous vehicles. We used You Only Look Once (YOLO) to build a machine learning model for traffic sign recognition and simulated attacks on traffic signs. The simulated attacks included LED light strobes, color-light flashes, and Gaussian noise. For the LED strobe and color-light flash attacks, translucent images were overlaid on the original traffic sign images to simulate the corresponding attack scenarios. For the Gaussian noise attack, Python 3.11.10 was used to add noise to the original images. Each attack method interfered with the trained model to a certain extent, hindering autonomous vehicles from detecting and recognizing traffic signs accurately.
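The abstract does not include the authors' code; the sketch below illustrates, under assumed parameters, how the two perturbation families it describes could be reproduced in Python with NumPy and OpenCV. The file names, the noise standard deviation sigma, the overlay color, and the blending weight alpha are illustrative assumptions, not values from the paper.

    import cv2
    import numpy as np

    def add_gaussian_noise(image, sigma=25.0):
        # Zero-mean Gaussian noise; sigma=25 is an assumed strength,
        # not a value reported in the paper.
        noise = np.random.normal(0.0, sigma, image.shape)
        noisy = image.astype(np.float64) + noise
        return np.clip(noisy, 0, 255).astype(np.uint8)

    def overlay_color_flash(image, color_bgr=(0, 0, 255), alpha=0.4):
        # Alpha-blend a solid translucent layer over the sign image to
        # mimic a color-light flash; color and alpha are illustrative.
        overlay = np.zeros_like(image)
        overlay[:] = color_bgr
        return cv2.addWeighted(overlay, alpha, image, 1.0 - alpha, 0.0)

    if __name__ == "__main__":
        sign = cv2.imread("stop_sign.jpg")  # hypothetical input image
        cv2.imwrite("stop_sign_noise.jpg", add_gaussian_noise(sign))
        cv2.imwrite("stop_sign_flash.jpg", overlay_color_flash(sign))

Images perturbed in this way can then be fed to the trained YOLO detector to measure the resulting drop in detection confidence and recognition accuracy.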
ISSN: 2673-4591
DOI: 10.3390/engproc2025092015