Deep Learning-Based Intrusion Detection With Adversaries

Bibliographic Details
Published in: IEEE Access, Vol. 6, pp. 38367-38384
Main Author: Wang, Zheng
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2018

Summary: Deep neural networks have demonstrated their effectiveness in most machine learning tasks, including intrusion detection. Unfortunately, recent research has found that deep neural networks are vulnerable to adversarial examples in the image classification domain, i.e., they leave an opening for an attacker to fool a network into misclassification by introducing imperceptible changes to the original pixels of an image. This vulnerability raises concerns about applying deep neural networks in security-critical areas such as intrusion detection. In this paper, we investigate the performance of state-of-the-art attack algorithms against deep learning-based intrusion detection on the NSL-KDD data set. The vulnerabilities of the neural networks employed by the intrusion detection systems are experimentally validated. The roles of individual features in generating adversarial examples are explored. Based on our findings, the feasibility and applicability of the attack methodologies are discussed.
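The attack algorithms the summary refers to craft adversarial examples by perturbing input features along the model's loss gradient. The sketch below illustrates one representative gradient-based attack, the fast gradient sign method (FGSM), against a small PyTorch classifier; the network shape, the epsilon value, and the untrained model are illustrative assumptions rather than the paper's exact setup, though the 41-dimensional input matches the NSL-KDD feature count.

    import torch
    import torch.nn as nn

    # Illustrative classifier: 41 normalized NSL-KDD-style features in,
    # two classes out (benign vs. attack). Untrained, for demonstration only.
    model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(x, y, epsilon=0.02):
        """Return an adversarial copy of x nudged to increase the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step each feature by epsilon in the direction of the loss
        # gradient's sign, then clamp back into the [0, 1] feature range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    x = torch.rand(8, 41)          # a batch of normalized traffic records
    y = torch.randint(0, 2, (8,))  # hypothetical labels: 0 benign, 1 attack
    # Fraction of the batch the perturbed inputs push to the wrong class.
    print((model(fgsm(x, y)).argmax(dim=1) != y).float().mean())

FGSM perturbs every feature at once; the paper's discussion of the roles of individual features corresponds to attacks that instead select a few high-impact features to modify, which matters in the intrusion-detection setting where not all features are attacker-controllable.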
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2854599