Evasion Attacks on Object Detection Models using Attack Transferability

Bibliographic Details
Published in: 2024 IEEE Secure Development Conference (SecDev), pp. 28-34
Main Authors: E R R, Arjun; Kulkarni, Pavan; Govindarajulu, Yuvaraj; Shah, Harshit; Parmar, Manojkumar
Format: Conference Proceeding
Language: English
Published: IEEE, 07.10.2024
DOI: 10.1109/SecDev61143.2024.00009

More Information
Summary: Object detection stands as a fundamental component in numerous real-world applications, ranging from autonomous vehicles to security systems. As these technologies become increasingly embedded in our daily lives, ensuring the security and resilience of object detection models is critically important. However, these models are vulnerable to adversarial attacks, in which subtle alterations intentionally introduced into input data can mislead the model's predictions. This paper explores the susceptibility of YOLO V8 and TensorFlow Object Detection models, such as MobileNet and ResNet, to adversarial attacks using the concept of attack transferability. Utilizing the Fast Gradient Sign Method alongside a distinct classifier model, we generate adversarial examples and evaluate their impact on object detection systems. Our analysis reveals a significant decline in the object detection models' performance in the presence of these adversarial examples, illustrating the transferability of attacks across different models. Our findings emphasize the critical necessity for robust defenses to safeguard object detection systems against transferable adversarial attacks.
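
The transfer-attack pipeline outlined in the summary can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes a torchvision MobileNetV2 classifier as the surrogate model, a torchvision Faster R-CNN detector as the victim, an epsilon of 8/255, and a placeholder image path "street_scene.jpg"; the models actually evaluated in the paper (YOLO V8 and TensorFlow Object Detection MobileNet/ResNet) and its exact settings may differ.

```python
# Sketch: craft an FGSM perturbation against a surrogate classifier, then
# measure how it affects a separate object detector (the transferability test).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# ImageNet statistics expected by the torchvision classifier; the detector
# accepts raw [0, 1] pixels and normalizes internally.
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)


def fgsm_perturb(classifier, image, epsilon):
    """Single-step, untargeted FGSM computed in pixel space on the surrogate classifier."""
    image = image.clone().detach().requires_grad_(True)
    logits = classifier(((image - IMAGENET_MEAN) / IMAGENET_STD).unsqueeze(0))
    # Push the image away from the class the classifier currently predicts.
    predicted = logits.argmax(dim=1)
    loss = torch.nn.functional.cross_entropy(logits, predicted)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


def count_detections(detector, image, score_threshold=0.5):
    """Count confident detections as a crude proxy for detector performance."""
    with torch.no_grad():
        output = detector([image])[0]
    return int((output["scores"] > score_threshold).sum())


if __name__ == "__main__":
    # Surrogate classifier the attacker has gradient access to.
    classifier = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
    # Victim detector: its gradients are never used, so any degradation is due to transfer.
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    image = to_tensor(Image.open("street_scene.jpg").convert("RGB"))  # placeholder path
    adv_image = fgsm_perturb(classifier, image, epsilon=8 / 255)

    print("detections (clean):      ", count_detections(detector, image))
    print("detections (adversarial):", count_detections(detector, adv_image))
```

Because the perturbation is computed without any access to the detector's gradients or architecture, a drop in the detection count on the adversarial image reflects attack transferability rather than a direct white-box attack on the detector.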