Adversarial attacks through architectures and spectra in face recognition
| Published in | Pattern recognition letters, Vol. 147, pp. 55-62 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Amsterdam: Elsevier B.V., 01.07.2021 |
Summary:

• We propose a novel adversarial attack on face images captured in different spectra.
• The attacks are transposed across different DNN architectures and different spectra.
• The attacks are composed of an FGSM attack step and a noise transfer step.
• The attacks are performed on two different datasets: VIS-NIR and VIS-TH.
• The results show that the DNN architectures respond differently to the attack.
The ability of Deep Neural Networks (DNNs) to make fast predictions with high accuracy has made them very popular in real-time applications. DNNs are nowadays used for secure access to services and mobile devices. However, as the use of DNNs has grown, attack techniques have emerged to "break" them. This paper presents a particular way to fool DNNs by moving from one spectrum to another. The application field we explore is face recognition. The attack is first built on a face-recognition DNN trained on Visible, Near-Infrared, or Thermal images, then transposed to another spectrum to fool a second DNN. The attacks are based on the Fast Gradient Sign Method (FGSM), with the aim of misclassifying the subject while knowing the DNN under attack (white-box attack) but not knowing the DNN to which the attack will be transposed (black-box attack). Results show that this cross-spectral attack is able to fool the most popular DNN architectures. In the worst cases, the attacked DNN becomes useless for face recognition.
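The two steps named in the highlights (an FGSM attack step followed by a noise transfer step) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's code: the `white_box` and `black_box` models, the `eps` budget, and the aligned pair `vis_img`/`nir_img` are hypothetical placeholders, and the sketch assumes both spectra are captured at the same resolution and channel count so the noise can be reused directly.

```python
# Minimal sketch of the cross-spectral attack described in the abstract,
# assuming PyTorch. All model and data names are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_noise(model, image, label, eps=0.03):
    """White-box FGSM step: return the signed-gradient noise for `model`.

    image: (1, C, H, W) float tensor in [0, 1]; label: (1,) long tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # identification loss
    loss.backward()
    return eps * image.grad.sign()                # signed-gradient perturbation

# Usage (hypothetical models and data):
# white_box -- a face DNN trained on visible-light (VIS) images (known).
# black_box -- a face DNN trained on near-infrared (NIR) images (unknown
#              to the attacker, queried only for evaluation).
# vis_img, nir_img -- spatially aligned captures of the same subject.
#
# noise = fgsm_noise(white_box, vis_img, subject_id)        # FGSM attack step
# adv_nir = torch.clamp(nir_img + noise, 0.0, 1.0)          # noise transfer step
# fooled = black_box(adv_nir).argmax(dim=1) != subject_id   # black-box test
```

Note that only the last line touches the black-box network: the gradient is computed entirely on the known white-box model, which is what makes transposing the attack to an unknown DNN practical.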
| ISSN | 0167-8655; 1872-7344 |
|---|---|
| DOI | 10.1016/j.patrec.2021.04.004 |