One Neuron to Fool Them All
Main Authors:
Format: Journal Article
Language: English
Published: 20.03.2020
Summary: Despite vast research on adversarial examples, the root causes of model susceptibility are not well understood. Instead of looking at attack-specific robustness, we propose a notion that evaluates the sensitivity of individual neurons in terms of how robust the model's output is to direct perturbations of that neuron's output. Analyzing models from this perspective reveals distinctive characteristics of both standard and adversarially trained robust models, and leads to several curious results. In our experiments on CIFAR-10 and ImageNet, we find that attacks using a loss function that targets just a single sensitive neuron find adversarial examples nearly as effectively as ones that target the full model. We analyze the properties of these sensitive neurons to propose a regularization term that can help a model achieve robustness to a variety of different perturbation constraints while maintaining accuracy on natural data distributions. Code for all our experiments is available at https://github.com/iamgroot42/sauron.
DOI: 10.48550/arxiv.2003.09372
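The single-neuron attack the abstract describes lends itself to a short illustration. The sketch below is not the authors' implementation (that lives in the linked repository); it is a minimal PGD-style loop, assuming a PyTorch image classifier, in which the attack loss is the activation of one chosen neuron rather than the full classification loss. The function name single_neuron_attack, the layer and neuron_idx handles, and the step sizes are all illustrative assumptions.

    import torch

    def single_neuron_attack(model, layer, neuron_idx, x,
                             eps=8/255, alpha=2/255, steps=10):
        # Hypothetical sketch, not the paper's code: maximize one neuron's
        # activation under an L-inf budget. `layer` is any module inside
        # `model`; `neuron_idx` picks a channel/unit of its output.
        activation = {}

        def hook(module, inputs, output):
            activation["out"] = output

        handle = layer.register_forward_hook(hook)
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            model(x_adv)  # forward pass fills the hook
            # Single-neuron "loss": mean activation of the chosen unit only.
            loss = activation["out"][:, neuron_idx].mean()
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()  # gradient-sign step
                # Project back into the eps-ball around the clean input.
                x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
                x_adv = x_adv.clamp(0, 1)  # keep a valid image
        handle.remove()
        return x_adv.detach()

A success check would then compare model(x_adv).argmax(1) against the clean predictions; per the abstract, such a single-neuron objective finds adversarial examples nearly as effectively as attacks that target the full model on CIFAR-10 and ImageNet.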