A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 07.05.2019 |
Subjects | |
Summary: Machine learning models are increasingly threatened by adversarial
attacks, so finding models that are resilient to these attacks is important. In
this work, we present, for the first time, a comprehensive analysis of the
behavior of a more bio-plausible class of networks, namely Spiking Neural
Networks (SNNs), under state-of-the-art adversarial tests. We perform a
comparative study of the accuracy degradation of a conventional VGG-9
Artificial Neural Network (ANN) and an equivalent spiking network on the
CIFAR-10 dataset, in both whitebox and blackbox settings, for different types
of single-step and multi-step FGSM (Fast Gradient Sign Method) attacks. We
demonstrate that SNNs tend to show greater resiliency than ANNs under blackbox
attack scenarios. Additionally, we find that SNN robustness depends largely on
the training mechanism: SNNs trained by spike-based backpropagation are more
adversarially robust than those obtained by ANN-to-SNN conversion rules in
several whitebox and blackbox scenarios. Finally, we propose a simple yet
effective framework for crafting adversarial attacks from SNNs. Our results
suggest that attacks crafted from SNNs following our proposed method are much
stronger than those crafted from ANNs.
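For reference, below is a minimal PyTorch sketch of the standard single-step
and multi-step FGSM attacks the abstract refers to. It assumes `model` is any
differentiable classifier with inputs normalized to [0, 1]; the function names
and parameters are illustrative, and this is the textbook formulation, not the
paper's SNN-specific attack-crafting framework (SNNs typically require
surrogate gradients, which are not reproduced here).

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Single-step FGSM (Goodfellow et al., 2015): perturb the input by
    # epsilon in the direction of the sign of the input-gradient of the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

def multistep_fgsm(model, x, y, epsilon, alpha, steps):
    # Multi-step (iterative) FGSM: repeat small sign-gradient steps of size
    # alpha, projecting back into the epsilon-ball around the original input.
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        # project onto the L-infinity ball of radius epsilon, then clip
        x_adv = torch.max(torch.min(x_adv, x_orig + epsilon), x_orig - epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```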
DOI: 10.48550/arxiv.1905.02704