EAP: An effective black-box impersonation adversarial patch attack method on face recognition in the physical world
| Published in | Neurocomputing (Amsterdam), Vol. 580, p. 127517 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.05.2024 |
Summary: Face recognition models and systems based on deep neural networks are vulnerable to adversarial examples. However, existing attacks on face recognition are either impractical or ineffective for black-box impersonation attacks in the physical world. In this paper, we propose EAP, an effective black-box impersonation attack method on face recognition in the physical world. EAP generates adversarial patches that can be printed by mobile and compact printers and attached to the source face to fool face recognition models and systems. To improve the transferability of adversarial patches, our approach incorporates random similarity transformations and image pyramid strategies, increasing input diversity. Furthermore, we introduce a meta-ensemble attack strategy that harnesses multiple pre-trained face models to extract common gradient features. We evaluate the effectiveness of EAP on two face datasets, using 16 state-of-the-art face recognition backbones, 9 heads, and 5 commercial systems. Moreover, we conduct physical experiments to substantiate its practicality. Our results demonstrate that EAP is capable of effectively executing impersonation attacks against state-of-the-art face recognition models and systems in both digital and physical environments.
ISSN: 0925-2312; 1872-8286
DOI: 10.1016/j.neucom.2024.127517
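The summary above names two transferability ingredients (random similarity transformations and an image pyramid for input diversity) plus an ensemble of surrogate face models. Below is a minimal PyTorch sketch of how such a diversified, ensemble-averaged patch update could look. It is an illustrative reconstruction, not the paper's code: every function name, hyperparameter, and the simple cosine-similarity loss here is an assumption.

```python
# Illustrative sketch of a diversified, ensemble-averaged impersonation patch
# update. Assumptions: `patch` is a full-image canvas, `mask` is a 0/1 tensor
# marking the patch region on the face, `target_emb` is the target identity's
# embedding, and `models` are surrogate face recognizers taking 112x112 input.
import torch
import torch.nn.functional as F


def random_similarity_transform(x, max_angle_deg=15.0, scale_range=(0.9, 1.1)):
    """Apply a random rotation-plus-scale (a similarity transform) per image."""
    b = x.size(0)
    angle = torch.empty(b, device=x.device).uniform_(-max_angle_deg, max_angle_deg)
    angle = angle * torch.pi / 180.0
    scale = torch.empty(b, device=x.device).uniform_(*scale_range)
    cos, sin = torch.cos(angle) * scale, torch.sin(angle) * scale
    theta = torch.zeros(b, 2, 3, device=x.device)
    theta[:, 0, 0], theta[:, 0, 1] = cos, -sin
    theta[:, 1, 0], theta[:, 1, 1] = sin, cos
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)


def image_pyramid(x, scales=(1.0, 0.9, 0.8)):
    """Return resized copies of x at several resolutions."""
    return [F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            for s in scales]


def patch_step(patch, face, mask, target_emb, models, lr=0.01):
    """One impersonation step: attach the patch, diversify the input, and
    average the gradient over pyramid levels and the surrogate ensemble."""
    patch = patch.detach().requires_grad_(True)
    adv = face * (1 - mask) + patch * mask              # attach patch to face
    adv = random_similarity_transform(adv)              # input diversity
    loss = 0.0
    for level in image_pyramid(adv):
        level = F.interpolate(level, size=(112, 112), mode="bilinear",
                              align_corners=False)
        for model in models:                            # surrogate ensemble
            emb = F.normalize(model(level), dim=1)
            # Impersonation: pull the embedding toward the target identity.
            loss = loss - F.cosine_similarity(emb, target_emb, dim=1).mean()
    loss.backward()
    # Signed gradient descent keeps the patch in the valid pixel range.
    return (patch - lr * patch.grad.sign()).clamp(0, 1).detach()
```

Note that the paper's meta-ensemble strategy extracts common gradient features across models rather than the plain loss averaging shown here; the sketch only illustrates where the input-diversity transforms sit in the update loop.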