Diffusion-based Adversarial Purification for Intrusion Detection


Bibliographic Details
Published in: arXiv.org
Main Authors: Merzouk, Mohamed Amine; Beurier, Erwan; Yaich, Reda; Boulahia-Cuppens, Nora; Cuppens, Frédéric
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 25.06.2024

Summary: The escalating sophistication of cyberattacks has encouraged the integration of machine learning techniques in intrusion detection systems, but the rise of adversarial examples presents a significant challenge. These crafted perturbations mislead ML models, enabling attackers to evade detection or trigger false alerts. In response, adversarial purification has emerged as a compelling defense, with diffusion models showing particularly promising results. However, their purification potential remains unexplored in the context of intrusion detection. This paper demonstrates the effectiveness of diffusion models in purifying adversarial examples in network intrusion detection. Through a comprehensive analysis of the diffusion parameters, we identify optimal configurations that maximize adversarial robustness with minimal impact on normal performance. Importantly, this study reveals insights into the relationship between diffusion noise and diffusion steps, a novel contribution to the field. Our experiments are carried out on two datasets and against five adversarial attacks. The implementation code is publicly available.
ISSN:2331-8422
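
The purification scheme the summary describes (partially diffuse an input, then denoise it back) can be sketched as follows. This is a minimal, illustrative DDPM-style sketch, not the authors' implementation: the noise-prediction model `denoiser` is an assumed stand-in for a trained network, and the schedule parameters are conventional defaults. The diffusion amount `t_star` corresponds to the "diffusion noise"/"diffusion steps" trade-off the paper analyzes: larger values wash out adversarial perturbations but also degrade clean features.

```python
import math
import random

def make_schedule(T=100, beta_min=1e-4, beta_max=0.02):
    """Linear beta schedule and cumulative products alpha_bar_t (standard DDPM defaults)."""
    betas = [beta_min + (beta_max - beta_min) * t / (T - 1) for t in range(T)]
    alpha_bars, prod = [], 1.0
    for b in betas:
        prod *= (1.0 - b)
        alpha_bars.append(prod)
    return betas, alpha_bars

def diffuse(x, t, alpha_bars, rng):
    """Forward diffusion: x_t = sqrt(abar_t) * x + sqrt(1 - abar_t) * eps."""
    a = alpha_bars[t]
    return [math.sqrt(a) * xi + math.sqrt(1.0 - a) * rng.gauss(0, 1) for xi in x]

def purify(x_adv, t_star, denoiser, rng, T=100):
    """Purify a (possibly adversarial) feature vector: diffuse to step t_star,
    then run t_star reverse denoising steps back to t = 0.

    `denoiser(x, t)` is assumed to return the predicted noise eps_hat
    (in practice, a network trained on clean traffic features)."""
    betas, alpha_bars = make_schedule(T)
    x = diffuse(x_adv, t_star, alpha_bars, rng)
    for t in range(t_star, -1, -1):
        eps_hat = denoiser(x, t)
        alpha = 1.0 - betas[t]
        coef = betas[t] / math.sqrt(1.0 - alpha_bars[t])
        mean = [(xi - coef * ei) / math.sqrt(alpha) for xi, ei in zip(x, eps_hat)]
        if t > 0:  # add the posterior noise except at the final step
            x = [m + math.sqrt(betas[t]) * rng.gauss(0, 1) for m in mean]
        else:
            x = mean
    return x
```

The purified vector would then be fed to the intrusion-detection classifier in place of the raw input, so the defense requires no retraining of the detector itself.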