Empirical Evaluation on Robustness of Deep Convolutional Neural Networks Activation Functions Against Adversarial Perturbation


Bibliographic Details
Published in: 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW), pp. 223-227
Main Authors: Su, Jiawei; Vargas, Danilo Vasconcellos; Sakurai, Kouichi
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2018

Summary: Recent research has shown that deep convolutional neural networks (DCNNs) are vulnerable to several different types of attacks, while the reasons for this vulnerability are still under investigation. For instance, an adversarial perturbation can apply a slight change to a natural image that causes the target DCNN to produce a wrong recognition, yet explanations of why DCNNs are sensitive to such small modifications differ from one study to another. In this paper, we evaluate the robustness of two commonly used DCNN activation functions, sigmoid and ReLU, against the recently proposed low-dimensional one-pixel attack. We show that the choice of activation function can be an important factor influencing the robustness of a DCNN. The results show that, compared with sigmoid, the ReLU non-linearity is more vulnerable, allowing the low-dimensional one-pixel attack to achieve a much higher success rate and attack confidence. These results give insights into designing new activation functions to enhance the security of DCNNs.
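
As a rough illustration of the setup the summary describes (a sketch, not the authors' code or data), the following Python/PyTorch snippet builds two small CNNs that differ only in their activation function (ReLU vs. sigmoid) and measures how much a single-pixel change shifts each model's output. The architecture, input size, pixel location, and perturbation color are assumptions made for illustration; the paper's actual one-pixel attack searches for the perturbation with differential evolution rather than fixing it by hand.

    # Minimal sketch: sensitivity of ReLU vs. sigmoid CNNs to a one-pixel change.
    # Untrained toy models; only meant to show how such a comparison can be set up.
    import torch
    import torch.nn as nn

    def make_cnn(activation: nn.Module) -> nn.Module:
        """A tiny CNN for 32x32 RGB inputs (CIFAR-10-like); the activation is swappable."""
        return nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), activation,
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), activation,
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 10),
        )

    def one_pixel_perturb(image: torch.Tensor, x: int, y: int, rgb: torch.Tensor) -> torch.Tensor:
        """Return a copy of `image` (C, H, W) with a single pixel replaced by `rgb`."""
        adv = image.clone()
        adv[:, y, x] = rgb
        return adv

    if __name__ == "__main__":
        torch.manual_seed(0)
        image = torch.rand(3, 32, 32)  # placeholder for a natural image
        for name, act in [("relu", nn.ReLU()), ("sigmoid", nn.Sigmoid())]:
            model = make_cnn(act).eval()
            with torch.no_grad():
                clean = torch.softmax(model(image.unsqueeze(0)), dim=1)
                adv_img = one_pixel_perturb(image, x=5, y=7, rgb=torch.tensor([1.0, 0.0, 0.0]))
                adv = torch.softmax(model(adv_img.unsqueeze(0)), dim=1)
            # A larger shift in the output distribution suggests higher sensitivity
            # of that activation choice to the one-pixel perturbation.
            print(name, "max class-probability shift:", (adv - clean).abs().max().item())

In an evaluation like the paper's, the models would first be trained on a benchmark dataset and the perturbation would be optimized per image; the reported attack success rate and confidence are then aggregated over many images rather than read off a single forward pass as above.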
DOI:10.1109/CANDARW.2018.00049