MaGAT: Mask-Guided Adversarial Training for Defending Face Editing GAN Models From Proactive Defense

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 31, pp. 1-5
Main Authors: Luo, Shengwei; Huang, Fangjun
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2024
Summary: The malicious misuse of face editing technology endangers individual privacy and reputation. Adversarial attack-based proactive defense has been proposed to counter it: by perturbing facial images in advance, it prevents them from being successfully manipulated by face editing GAN models. However, malicious manipulators can defeat proactive defense through adversarial training, so studying how well proactive defense holds up against adversarially trained models is critical to realizing reliable proactive defense in real-world scenarios. In this letter, we propose a Mask-Guided Adversarial Training (MaGAT) framework to defend face editing GAN models from proactive defense, which aims at training GAN models to still output the original desirable images even if the input images are adversarial examples. Extensive experiments demonstrate that MaGAT remains effective on datasets unseen during training, suggesting it is applicable to real-world scenarios in which input images are unknown in advance.
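The abstract summarizes the idea without implementation details, but the mechanism it describes — a manipulator fine-tuning an editing GAN so that proactively defended (adversarially perturbed) inputs are still mapped to the desired edited image, with a facial mask guiding the loss — can be sketched. The following is a minimal, hypothetical PyTorch sketch: pgd_disrupt, magat_step, the mask construction, and all hyperparameters are illustrative assumptions, not the authors' actual algorithm.

    # Hypothetical sketch (not the authors' code). Assumes a PyTorch
    # generator G mapping an image to its edited version, and a mask
    # tensor marking the edited facial region.
    import torch
    import torch.nn.functional as F

    def pgd_disrupt(G, x, y_edit, mask, eps=8/255, alpha=2/255, steps=10):
        """Simulated proactive defense: a PGD perturbation of the input
        that maximizes the masked L1 distortion of the editing output."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.l1_loss(G(x_adv) * mask, y_edit * mask)
            grad, = torch.autograd.grad(loss, x_adv)
            x_adv = x_adv.detach() + alpha * grad.sign()  # ascent: the defender wants the edit to fail
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back to the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

    def magat_step(G, opt, x, y_edit, mask, adv_weight=1.0):
        """One mask-guided adversarial training step: fine-tune G so that
        both clean and defended inputs yield the desired edit, with the
        mask concentrating the loss on the edited region."""
        x_adv = pgd_disrupt(G, x, y_edit, mask)
        loss = (F.l1_loss(G(x) * mask, y_edit * mask)
                + adv_weight * F.l1_loss(G(x_adv) * mask, y_edit * mask))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

Training on defended examples in this way is what the abstract calls defeating proactive defense through adversarial training; the mask weighting reflects the "mask-guided" aspect, though the letter's actual mask design and loss terms are not given in this record.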
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2024.3380466