Testing the Robustness of Deepfake Detectors

Bibliographic Details
Published in: IEEE International Conference on Communications (2024), pp. 1-6
Main Authors: Radu, Andrei; Neacsu, Ana
Format: Conference Proceeding
Language: English
Published: IEEE, 03.10.2024
Summary: The term deepfake denotes an image artificially generated or altered using deep neural networks. Such methods are widespread, with a focus on creating ever more realistic samples, but their presence can cause problems, especially for public figures. Previous detection methods consider only modifications introduced by natural sources to make images appear more realistic, and many detection systems are agnostic to such changes. This work presents an overview of the effect of adversarial perturbations maliciously added to images in order to fool deepfake detectors. We construct a dataset of deepfake images using the latest generative models, and we identify a good trade-off between the accuracy of the detection systems and the magnitude of the adversarial noise added to the analysed images.
ISSN: 1938-1883
DOI: 10.1109/COMM62355.2024.10741489
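
The adversarial-perturbation setting described in the summary can be illustrated with the standard fast gradient sign method (FGSM). The sketch below is a generic illustration, not the paper's implementation: `detector` stands in for any pretrained deepfake classifier, the 64x64 input size and the "class 1 = fake" convention are assumptions, and the epsilon sweep merely mimics the accuracy-versus-noise trade-off the authors study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(detector: nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Add an L-infinity perturbation of size `epsilon` that pushes
    a deepfake detector toward misclassifying `image`."""
    # Track gradients on a detached copy of the input image.
    image = image.clone().detach().requires_grad_(True)
    # Loss of the detector's prediction w.r.t. the true label.
    loss = F.cross_entropy(detector(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp
    # back to the valid pixel range [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in detector; a real study would load a trained model.
    detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
    image = torch.rand(1, 3, 64, 64)   # hypothetical deepfake image
    label = torch.tensor([1])          # class 1 = "fake" (assumed)
    # Sweeping epsilon exposes the trade-off: larger perturbations
    # fool the detector more often but are more visible.
    for eps in (0.0, 0.01, 0.03, 0.1):
        adv = fgsm_perturb(detector, image, label, eps)
        pred = detector(adv).argmax(dim=1).item()
        print(f"epsilon={eps:.2f} -> predicted class {pred}")
```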