A Statistical Defense Approach for Detecting Adversarial Examples
Adversarial examples are maliciously modified inputs crafted to fool deep neural networks (DNNs). The existence of such inputs poses a major obstacle to the adoption of DNN-based solutions. Many researchers have already contributed to the topic, providing both cutting-edge attack techniques and vari...
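To make the notion of a "maliciously modified input" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way such inputs are crafted. The toy linear classifier, data, and parameters below are illustrative assumptions, not the setup of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a toy linear (logistic) classifier
x = rng.normal(size=8)   # a clean input
y = 1.0                  # its true label (binary: 0 or 1)

def loss(x):
    # logistic loss of the linear model on (x, y)
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the logistic loss w.r.t. the input is (p - y) * w.
p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = (p - y) * w

eps = 0.1
# FGSM: take a small step in the sign of the gradient, which increases the loss
# while keeping the perturbation bounded by eps in every coordinate.
x_adv = x + eps * np.sign(grad_x)

print(loss(x), loss(x_adv))  # loss rises on the perturbed input
```

The perturbation is imperceptibly small (at most `eps` per coordinate), yet it pushes the model's loss up, which is exactly the kind of input a statistical defense tries to detect.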
Published in | arXiv.org
---|---
Main Authors | , ,
Format | Paper
Language | English
Published | Ithaca: Cornell University Library, arXiv.org, 26.08.2019
Subjects |
Online Access | Get full text