Creating an Explainable Intrusion Detection System Using Self Organizing Maps

Bibliographic Details
Published in: 2022 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 404-412
Main Authors: Ables, Jesse; Kirby, Thomas; Anderson, William; Mittal, Sudip; Rahimi, Shahram; Banicescu, Ioana; Seale, Maria
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2022
DOI: 10.1109/SSCI51031.2022.10022255

Summary: Modern Artificial Intelligence (AI) enabled Intrusion Detection Systems (IDS) are complex black boxes. This means that a security analyst will have little to no explanation or clarification on why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a novel X-IDS architecture featuring a Self Organizing Map (SOM) that is capable of producing explanatory visualizations. We leverage the SOM's explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions. Local explanations are generated for individual datapoints to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and CIC-IDS-2017 datasets. This focus on explainability, along with building an accurate IDS, sets our work apart from other studies.
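
As a minimal illustration of the global and local explanations the summary describes (a sketch, not the authors' implementation), the following Python snippet trains a SOM using the third-party minisom library on synthetic stand-in features. The grid size, training length, and feature count are illustrative assumptions; the U-matrix serves as a global view of cluster structure, while a datapoint's best matching unit (BMU) and per-feature deviations serve as a local explanation.

    # Hypothetical sketch: SOM-based global and local explanations.
    # Assumes the third-party `minisom` package; data is synthetic.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(0)
    X = rng.random((500, 10))   # stand-in for normalized IDS feature vectors

    som = MiniSom(8, 8, input_len=10, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(X, 5000)   # unsupervised training on the feature vectors

    # Global explanation: the U-matrix holds normalized inter-neuron
    # distances, letting an analyst see cluster boundaries on the map.
    u_matrix = som.distance_map()          # shape (8, 8), values in [0, 1]
    print("U-matrix:\n", u_matrix.round(2))

    # Local explanation for one datapoint: its BMU on the grid, plus the
    # per-feature deviation from that unit's weight vector, indicating
    # which features drove the placement.
    x = X[0]
    bmu = som.winner(x)                    # grid coordinates of the BMU
    contrib = np.abs(x - som.get_weights()[bmu])
    print("BMU:", bmu)
    print("Top contributing features:", np.argsort(contrib)[::-1][:3])

In practice, the U-matrix would be rendered as a heatmap for the analyst, and map units would be labeled (e.g., benign vs. attack) from training data so a datapoint's BMU carries a prediction alongside its explanation.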