PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
| Main Authors | Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, Nate Kushman |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 30.10.2017 |
| Summary | Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10. |
|---|---|
| DOI | 10.48550/arxiv.1710.10766 |
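
The detection result described in the summary, that adversarial inputs concentrate in low-probability regions of the training distribution, lends itself to a simple rank-based hypothesis test. The sketch below is illustrative rather than the paper's exact procedure: `train_log_liks` is assumed to hold per-image log-likelihoods from a pretrained neural density model such as a PixelCNN, and the significance level `alpha` is a placeholder.

```python
import numpy as np

def p_value(test_log_lik: float, train_log_liks: np.ndarray) -> float:
    """Rank-based p-value: the fraction of training images whose log-likelihood
    under the density model is at most that of the test image. A very small
    p-value means the test image sits in a low-probability region of the
    training distribution, the signature of adversarial perturbation."""
    n = len(train_log_liks)
    rank = np.sum(train_log_liks <= test_log_lik)
    # Add-one smoothing keeps the p-value strictly inside (0, 1).
    return (rank + 1) / (n + 1)

def looks_adversarial(test_log_lik: float,
                      train_log_liks: np.ndarray,
                      alpha: float = 0.01) -> bool:
    """Flag the input if its p-value falls below the chosen significance level."""
    return p_value(test_log_lik, train_log_liks) < alpha
```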
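The purification step can likewise be sketched as a greedy per-pixel search that keeps each pixel within a small defense radius of the input while maximizing the density model's conditional likelihood, nudging the image back toward the training distribution before it is handed to the classifier. The `conditional_logits` callable and the `eps_defend` value below are assumed interfaces, not the paper's code.

```python
import numpy as np

def purify(x: np.ndarray, conditional_logits, eps_defend: int = 16) -> np.ndarray:
    """Greedy purification sketch.

    x                  -- uint8 image of shape (H, W, C), the possibly perturbed input
    conditional_logits -- hypothetical callable that, given the current image,
                          returns an (H, W, C, 256) array of logits over pixel
                          values from an autoregressive density model (e.g. PixelCNN)
    eps_defend         -- search radius around each original pixel value
    """
    purified = x.copy()
    h, w, c = x.shape
    for i in range(h):
        for j in range(w):
            for k in range(c):
                logits = conditional_logits(purified)[i, j, k]  # 256 logits
                lo = max(int(x[i, j, k]) - eps_defend, 0)
                hi = min(int(x[i, j, k]) + eps_defend, 255)
                # Pick the value inside the defense window with the highest
                # model probability, pulling the pixel toward the data manifold.
                window = logits[lo:hi + 1]
                purified[i, j, k] = lo + int(np.argmax(window))
    return purified
```

Because the classifier is only ever applied to the purified image, a defense of this form can wrap an already deployed model without retraining it, which is the classifier-agnostic property highlighted in the summary.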