Training Robust Deep Neural Networks via Adversarial Noise Propagation
In practice, deep neural networks have been found to be vulnerable to various types of noise, such as adversarial examples and corruption. Various adversarial defense methods have accordingly been developed to improve adversarial robustness for deep models. However, simply training on data mixed wit...
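The baseline the abstract alludes to, training on data mixed with adversarial examples, can be sketched minimally. The snippet below is our illustration of that standard adversarial-training baseline (using one-step FGSM on a toy logistic-regression model), not the paper's adversarial noise propagation method; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb x along the sign of the input gradient of the loss.

    For binary cross-entropy with logits z = x @ w + b, dL/dz = p - y and
    dz/dx = w, so the input gradient is (p - y) * w.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def train_mixed(x, y, eps=0.1, lr=0.5, steps=200, seed=0):
    """Plain adversarial training: each step fits clean + FGSM-perturbed data."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=x.shape[1])
    b = 0.0
    for _ in range(steps):
        x_adv = fgsm(x, y, w, b, eps)          # craft adversarial copies
        xb = np.concatenate([x, x_adv])        # mix clean and adversarial inputs
        yb = np.concatenate([y, y])
        p = sigmoid(xb @ w + b)
        w -= lr * (xb.T @ (p - yb)) / len(yb)  # gradient step on mixed batch
        b -= lr * np.mean(p - yb)
    return w, b

# Toy data: two Gaussian blobs centered at -1 and +1.
rng = np.random.default_rng(1)
n = 100
x = np.vstack([rng.normal(loc=-1.0, size=(n, 2)),
               rng.normal(loc=+1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = train_mixed(x, y)
acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
```

This is the "simply training on mixed data" scheme the abstract contrasts against; the paper's approach instead propagates noise into the network's hidden layers, which this sketch does not attempt to reproduce.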
| Field | Value |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 19.09.2019 |