A Statistical Defense Approach for Detecting Adversarial Examples

Adversarial examples are maliciously modified inputs created to fool deep neural networks (DNNs). The discovery of such inputs presents a major obstacle to the expansion of DNN-based solutions. Many researchers have already contributed to the topic, providing both cutting-edge attack techniques and vari...


Bibliographic Details
Published in: arXiv.org
Main Authors: Cennamo, Alessandro; Freeman, Ido; Kummert, Anton
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 26.08.2019
