Detecting GAN-Generated Imagery Using Saturation Cues

Bibliographic Details
Published in: 2019 IEEE International Conference on Image Processing (ICIP), pp. 4584 - 4588
Main Authors: McCloskey, Scott; Albright, Michael
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2019

Summary: Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g. 'deepfakes'. Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and real camera images used to train the GAN.
ISSN: 2381-8549
DOI: 10.1109/ICIP.2019.8803661
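
The summary describes the saturation/exposure cue only at a high level. As a rough illustration of the general idea, and not the authors' exact features or classifier, the sketch below measures how often pixel intensities reach the extremes of the dynamic range; statistics like these could then be fed to a downstream detector. The thresholds, function name, and example file name are assumptions made for illustration.

```python
# Illustrative sketch only: compute simple exposure-extreme statistics for an
# image. The specific thresholds and any downstream decision rule are
# assumptions, not the features defined in the paper.
import numpy as np
from PIL import Image


def exposure_extreme_fractions(path, low=2, high=253):
    """Return the fractions of near-under-exposed and near-saturated pixels."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    n = img.size  # total number of channel values
    under = np.count_nonzero(img <= low) / n   # fraction near the dark extreme
    over = np.count_nonzero(img >= high) / n   # fraction near full saturation
    return under, over


if __name__ == "__main__":
    # "example.jpg" is a hypothetical input; in practice these statistics would
    # be computed for many images and used to train a classifier.
    under, over = exposure_extreme_fractions("example.jpg")
    print(f"under-exposed fraction: {under:.4f}, saturated fraction: {over:.4f}")
```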