Training Robust Deep Neural Networks via Adversarial Noise Propagation

In practice, deep neural networks have been found to be vulnerable to various types of noise, such as adversarial examples and common corruptions. Adversarial defense methods have accordingly been developed to improve the adversarial robustness of deep models. However, simply training on data mixed wit...
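The defenses the abstract refers to typically augment training with adversarially perturbed inputs. As a minimal sketch of the underlying idea (not the authors' ANP method, which propagates noise into hidden layers), the following shows an FGSM-style perturbation on a toy logistic-regression model; the model, weights, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # hypothetical "trained" weights
x = rng.normal(size=4)   # one input example
y = 1.0                  # its binary label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y, w):
    # Gradient of binary cross-entropy w.r.t. the input:
    # for p = sigmoid(w @ x), dL/dx = (p - y) * w
    p = sigmoid(w @ x)
    return (p - y) * w

eps = 0.1
# FGSM step: move the input in the sign direction of the loss gradient,
# bounded in L-infinity norm by eps.
x_adv = x + eps * np.sign(loss_grad_wrt_input(x, y, w))

# Adversarial training then mixes (x_adv, y) back into the training batch.
print(np.max(np.abs(x_adv - x)))  # perturbation magnitude, at most eps
```

Training only on such input-level perturbations is exactly the "simply training on data mixed with noise" setup the abstract contrasts against.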

Bibliographic Details
Main Authors: Liu, Aishan; Liu, Xianglong; Zhang, Chongzhi; Yu, Hang; Liu, Qiang; Tao, Dacheng
Format: Journal Article
Language: English
Published: 19.09.2019
