Improving Classification Results in Network Data Analysis using Interpretability Methods


Bibliographic Details
Published in: 2022 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1 - 6
Main Authors: Begusic, Domazoj, Walker, Luke Frederick, Krznaric, Sanja, Pintar, Damir
Format: Conference Proceeding
Language: English
Published: University of Split, FESB, 22.09.2022

Summary: Network intrusion detection and prevention systems are usually developed with a rule-based approach, built from rules defined by network security experts who can apply logic from both low and high network layers. In recent times, however, machine learning methods have also achieved promising results in developing Network Intrusion Detection Systems, and their popularity is steadily rising. Unfortunately, the use of these machine learning methods on real-life problems has regularly shown that no good out-of-the-box solution exists for production or deployment. Moreover, because the volume and complexity of the data that machine learning methods face grows over time, improvements and adaptations are frequently required. As the problem at hand becomes more convoluted, so does the nature of the applied solution. This complexity is further compounded by the fact that certain machine and deep learning methods intrinsically offer no way of understanding how they make decisions, effectively behaving like black boxes. All of this significantly lowers the understandability of implemented solutions in production environments that are already quite complex, which justifies the need for interpretability methods. While interpretability methods are commonly designed to be used by humans, in this paper we propose a way of improving a model's classification performance by applying data mining methods to the explanation data generated by interpretability methods. The paper's main contribution is the improvement of a previously built network intrusion detection system through an automated process of integrating explanations into the original data, with the purpose of improving both the interpretability and the score of the machine learning model used.
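The core idea summarized above — feeding explanation data back into the original feature set before retraining — can be sketched minimally as follows. Note that this record does not specify the paper's actual interpretability method, models, or dataset; in this hypothetical sketch, per-sample logistic-regression contributions (coefficient times feature value) stand in for explanation values, synthetic data stands in for network traffic, and a random forest stands in for the intrusion-detection classifier.

```python
# Hypothetical sketch: augment the original features with per-sample
# "explanation" values, then retrain a classifier on the augmented data.
# The explanation stand-in here is coef_j * x_ij from a logistic regression;
# a real pipeline might use SHAP values or another interpretability method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (placeholder for network traffic records)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: fit a simple model and derive per-sample explanation features
expl_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
contrib_tr = X_tr * expl_model.coef_   # shape (n_samples, n_features)
contrib_te = X_te * expl_model.coef_

# Step 2: concatenate explanations with the original features and retrain
X_tr_aug = np.hstack([X_tr, contrib_tr])
X_te_aug = np.hstack([X_te, contrib_te])

baseline = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
augmented = RandomForestClassifier(random_state=0).fit(X_tr_aug, y_tr)

acc_base = accuracy_score(y_te, baseline.predict(X_te))
acc_aug = accuracy_score(y_te, augmented.predict(X_te_aug))
print(f"baseline accuracy:  {acc_base:.3f}")
print(f"augmented accuracy: {acc_aug:.3f}")
```

Whether the augmented model actually scores higher depends on the data and the quality of the explanations; the sketch only illustrates the mechanical step of integrating explanation data into the training set.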
ISSN: 1847-358X