Deep keypoints adversarial attack on face recognition systems
Published in | Neurocomputing (Amsterdam) Vol. 621; p. 129295 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | Elsevier B.V, 07.03.2025 |
ISSN | 0925-2312 |
DOI | 10.1016/j.neucom.2024.129295 |
Summary: | Face recognition systems based on deep learning have recently demonstrated outstanding success on complex tasks, yet they turn out to be highly vulnerable to attack, so the vulnerability of such systems has to be studied. An efficient attack strategy deceives the face recognition system by creating adversarial examples, which can cause the system to mistakenly reject a genuine subject. Current methods for creating adversarial face images have poor perceptual quality and take too long to produce. In this paper, we introduce a novel adversarial attack named DKA2 that combines geometry-based and intensity-based attack categories. The attack consists of three main parts: keypoint detection, geometric keypoint perturbation, and adversarial mask generation. Unlike other attacks that perturb every pixel in the image, our method perturbs only the salient regions of the face represented by the keypoints (about 2% of the image) and the automatically generated adversarial mask. Limiting the perturbed points minimizes the distortion caused by the attack and yields natural-looking images. Moreover, the proposed approach produces stronger adversarial examples that evade black-box face matchers with attack success rates as high as 97.03%.
•Robust attack on face recognition systems using intensity and geometry-based methods.
•Perturbs over 68 landmarks for adversarial face attack production.
•High success on black-box face recognition systems with LFW and CFP datasets.
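The summary gives no implementation details, but the core idea of restricting an intensity perturbation to small regions around detected facial keypoints can be sketched as follows. This is a minimal, hypothetical Python/PyTorch illustration, not the authors' DKA2 code: the embedding model, the landmark coordinates (e.g. 68 points from an off-the-shelf detector), and the helper names `landmark_mask` and `masked_fgsm` are all assumptions introduced here for illustration.

```python
# Minimal sketch (not the authors' DKA2 implementation): restrict a one-step
# intensity perturbation (FGSM-style) to small patches around facial
# landmarks, so only the salient regions of the face are modified.
import torch
import torch.nn.functional as F

def landmark_mask(landmarks, image_shape, radius=3):
    """Binary mask that is 1 only inside small squares around each landmark.

    landmarks: (N, 2) tensor of (x, y) pixel coordinates, e.g. 68 points
    from an off-the-shelf detector (assumed to be given).
    image_shape: (C, H, W) shape of the face image tensor.
    """
    _, h, w = image_shape
    mask = torch.zeros(1, h, w)
    for pt in landmarks:
        x, y = int(pt[0]), int(pt[1])
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        mask[:, y0:y1, x0:x1] = 1.0
    return mask

def masked_fgsm(model, image, genuine_embedding, landmarks, eps=8 / 255):
    """One-step intensity attack applied only inside the landmark mask.

    model: face embedding network (batched image -> feature vectors),
    assumed to be given. The loss pushes the adversarial embedding away
    from the genuine one, which is one simple way to make a matcher
    reject a real subject.
    """
    mask = landmark_mask(landmarks, image.shape)
    adv = image.clone().requires_grad_(True)
    emb = model(adv.unsqueeze(0)).squeeze(0)
    similarity = F.cosine_similarity(emb, genuine_embedding, dim=0)
    similarity.backward()
    with torch.no_grad():
        # Descend on the similarity, but only where the mask is 1.
        adv = image - eps * adv.grad.sign() * mask
        adv = adv.clamp(0.0, 1.0)
    return adv
```

The sketch covers only the intensity-masking component; per the summary, DKA2 additionally applies a geometric perturbation of the keypoints and an automatically generated adversarial mask.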