Face Friend-Safe Adversarial Example on Face Recognition System
Published in | 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), pp. 547 - 551
---|---
Main Authors | , , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 01.07.2019
Summary: | Deep neural networks (DNNs) deliver excellent performance on deep learning tasks such as image recognition, speech recognition, and pattern recognition, and much research in face recognition has built on them. However, adversarial examples are a serious threat to face recognition systems. A face adversarial example, produced by adding a small amount of noise to the original face image, can cause a face recognition system to misrecognize the face: an attacker can intentionally modify a face image with small distortion so that the system identifies it as another person, and modulating only a few points on the face can likewise lead to wrong recognition as someone else. The concept of a face friend-safe adversarial example can be useful in a military situation where friend and enemy forces are mixed; in such a setting, a face adversarial example is needed that is correctly recognized by a friend face recognition system and misidentified by an enemy face recognition system. In this paper, we propose a face friend-safe adversarial example targeting the FaceNet face recognition system. The proposed scheme generates, with minimum distortion, a face friend-safe adversarial example that is misrecognized by the enemy face recognition system but correctly recognized by the friend face recognition system. For the experiments, we used VGGFace2 and Labeled Faces in the Wild (LFW) as datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method achieves a 92.2% attack success rate and 91.4% friend accuracy with only 64.22 distortion. |
---|---|
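The dual-objective generation described in the summary can be sketched as gradient descent over two models at once: a hinge term pushing the enemy model toward misrecognition, a hinge term keeping the friend model correct, and an L2 penalty keeping distortion small. The two linear "recognizers", the margin, and all constants below are illustrative assumptions made to keep the sketch self-contained; the paper itself targets FaceNet-based DNNs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two recognizers (assumption: the paper uses FaceNet
# models; correlated linear scorers are used here only for self-containment).
# The sign of w @ x plays the role of the predicted identity.
d = 32
w_friend = rng.normal(size=d)
w_enemy = w_friend + rng.normal(size=d)   # related to, but different from, the friend model

x0 = 0.1 * (w_friend + w_enemy)    # "face" both models score positively (correctly recognized)
delta = np.zeros(d)                # adversarial perturbation to optimize
margin, c, lr = 1.0, 0.01, 0.01    # hinge margin, distortion weight, step size

for _ in range(500):
    x = x0 + delta
    grad = 2.0 * c * delta                 # distortion term: keep delta small
    if w_enemy @ x > -margin:              # enemy model still recognizes correctly:
        grad += w_enemy                    #   push its score below -margin (attack)
    if w_friend @ x < margin:              # friend model drifting toward an error:
        grad -= w_friend                   #   pull its score back above +margin
    delta -= lr * grad

x_adv = x0 + delta
friend_correct = w_friend @ x_adv > 0      # friend model still recognizes correctly
attack_success = w_enemy @ x_adv < 0       # enemy model misrecognizes
distortion = float(delta @ delta)
```

Because the two weight vectors are not collinear, the optimization can flip the enemy model's decision while holding the friend model's decision fixed, which is exactly the friend-safe property the abstract describes (here under toy linear models rather than the paper's DNNs).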
ISSN: | 2165-8536 |
DOI: | 10.1109/ICUFN.2019.8806124 |