IPatch: a remote adversarial patch
Published in: Cybersecurity (Singapore), Vol. 6, No. 1, pp. 18-19
Main Author: Mirsky, Yisroel
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore (SpringerOpen), 01.12.2023
Summary: Applications such as autonomous vehicles and medical screening use deep learning models to localize and identify hundreds of objects in a single frame. In the past, it has been shown how an attacker can fool these models by placing an adversarial patch within a scene. However, these patches must be placed in the target location and do not explicitly alter the semantics elsewhere in the image. In this paper, we introduce a new type of adversarial patch which alters a model's perception of an image's semantics. These patches can be placed anywhere within an image to change the classification or semantics of locations far from the patch. We call this new class of adversarial examples 'remote adversarial patches' (RAP). We implement our own RAP called IPatch and perform an in-depth analysis of image segmentation RAP attacks using five state-of-the-art architectures with eight different encoders on the CamVid street view dataset. Moreover, we demonstrate that the attack can be extended to object recognition models with preliminary results on the popular YOLOv3 model. We found that the patch can change the classification of a remote target region with a success rate of up to 93% on average.
ISSN: 2523-3246
DOI: 10.1186/s42400-023-00145-0
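The abstract describes optimizing a single patch so that, wherever it is placed in a scene, a segmentation model mislabels a distant target region. The following is a minimal PyTorch sketch of that general style of patch optimization; the `model` handle, the placement helper, the masked loss, and all hyperparameters are illustrative assumptions, not the paper's published method or code.

```python
import torch
import torch.nn.functional as F

# Sketch of remote-adversarial-patch optimization. Assumptions:
#  - `model` is a pretrained segmentation network mapping (N, 3, H, W)
#    images in [0, 1] to per-pixel class logits (N, C, H, W)
#  - `images` is a float tensor (N, 3, H, W) of training scenes in [0, 1]
#  - `target_mask` is a float tensor (H, W), 1 over the remote region to attack

def place_patch(image, patch, x, y):
    """Overwrite a (3, ph, pw) patch into a copy of the image at (x, y)."""
    out = image.clone()
    ph, pw = patch.shape[1:]
    out[:, :, y:y + ph, x:x + pw] = patch
    return out

def train_remote_patch(model, images, target_class, target_mask,
                       patch_size=50, steps=500, lr=0.05):
    """Optimize a patch so pixels under target_mask, far from the patch
    itself, are classified as target_class."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _ in range(steps):
        img = images[torch.randint(len(images), (1,))]  # sample one scene
        # Random placement each step so the patch works from any location.
        x = torch.randint(0, img.shape[-1] - patch_size, (1,)).item()
        y = torch.randint(0, img.shape[-2] - patch_size, (1,)).item()
        adv = place_patch(img, patch.clamp(0, 1), x, y)
        logits = model(adv)                              # (1, C, H, W)
        # Per-pixel cross-entropy toward target_class, counted only
        # inside the remote target region.
        tgt = torch.full(logits.shape[-2:], target_class).unsqueeze(0)
        loss = (F.cross_entropy(logits, tgt, reduction='none')
                * target_mask).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

Sampling a fresh random placement on every step is what pushes the patch toward location independence, matching the "can be placed anywhere within an image" property described in the abstract.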