Adversarial machine learning in Network Intrusion Detection Systems

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 186, p. 115782
Main Authors: Alhajjar, Elie; Maxwell, Paul; Bastian, Nathaniel
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd (Elsevier BV), 30.12.2021

Summary: Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. These examples have achieved a great deal of success in several domains such as image recognition, speech recognition, and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and the genetic algorithm) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available data sets, namely NSL-KDD and UNSW-NB15, and we contrast them with a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning-based NIDS in the face of adversarial perturbation.

Highlights:
• Machine learning algorithms are not robust in unconstrained domains.
• Evolutionary algorithms are able to generate successful adversarial examples.
• Generative adversarial networks provide a rich source of fooling examples.
• Network intrusion detection systems are vulnerable to maliciously crafted packets.
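To illustrate the evolutionary-search idea described in the summary, the sketch below runs a minimal genetic-algorithm evasion loop against a toy classifier. This is not the authors' implementation: the synthetic data, the logistic-regression surrogate model, and all hyperparameters (population size, generations, mutation scale) are assumptions chosen only to make the example self-contained and runnable.

```python
# Illustrative sketch of GA-based adversarial example generation (assumed
# setup, not the paper's code): evolve a perturbation of an "attack" feature
# vector so that a trained classifier's attack probability drops.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for NIDS features: class 1 = "attack", class 0 = "benign".
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def evolve_evasion(x, model, pop=40, gens=30, scale=0.5):
    """Genetic search for a perturbation that minimizes P(attack | x + delta)."""
    population = rng.normal(scale=scale, size=(pop, x.size))
    for _ in range(gens):
        # Fitness: lower attack probability is better.
        probs = model.predict_proba(x + population)[:, 1]
        parents = population[np.argsort(probs)[: pop // 2]]
        # Crossover: average random parent pairs; mutation: add Gaussian noise.
        n_children = pop - len(parents)
        pairs = rng.integers(0, len(parents), size=(n_children, 2))
        children = parents[pairs].mean(axis=1)
        children += rng.normal(scale=0.1, size=(n_children, x.size))
        population = np.vstack([parents, children])
    best = population[np.argmin(model.predict_proba(x + population)[:, 1])]
    return x + best

# Take an attack sample and search for an evasive variant of it.
attack = X[y == 1][0]
adv = evolve_evasion(attack, model)
print(float(model.predict_proba([attack])[0, 1]),
      float(model.predict_proba([adv])[0, 1]))
```

Because the best half of each generation is carried over unchanged, the lowest attack probability in the population is non-increasing across generations; the same loop turns into the paper's Monte Carlo baseline if the selection and crossover steps are replaced by fresh random perturbations each round.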
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.115782