Creating an Explainable Intrusion Detection System Using Self Organizing Maps
| Published in | 2022 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 404-412 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 04.12.2022 |
| DOI | 10.1109/SSCI51031.2022.10022255 |
Summary: Modern Artificial Intelligence (AI)-enabled Intrusion Detection Systems (IDS) are complex black boxes, meaning a security analyst has little to no explanation of why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a novel X-IDS architecture featuring a Self-Organizing Map (SOM) that is capable of producing explanatory visualizations. We leverage the SOM's explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions; local explanations are generated for individual data points to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and CIC-IDS-2017 datasets. This focus on explainability, combined with building an accurate IDS, sets our work apart from other studies.
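The SOM-based explanation workflow the abstract describes can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the paper's implementation) using the open-source `minisom` Python library: the U-matrix serves as a global explanation of the map's learned structure, and a simple feature-difference heuristic against a data point's best matching unit stands in for a local explanation. The data array, map size, and feature-ranking heuristic are all assumptions made for illustration.

```python
# A minimal sketch (not the authors' implementation) of the two explanation
# types described in the abstract, using the third-party `minisom` library.
import numpy as np
from minisom import MiniSom

# Hypothetical stand-in for preprocessed network-flow features scaled to
# [0, 1] (e.g., NSL-KDD rows after encoding and min-max scaling).
X = np.random.rand(1000, 41)

som = MiniSom(x=15, y=15, input_len=X.shape[1],
              sigma=1.5, learning_rate=0.5, random_seed=42)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Global explanation: the U-matrix gives inter-neuron distances across the
# whole map, showing the analyst the cluster structure the model learned.
u_matrix = som.distance_map()          # shape (15, 15), values in [0, 1]

# Local explanation: for one data point, find its best matching unit (BMU)
# and rank the features that differ most from that unit's weight vector.
sample = X[0]
bmu = som.winner(sample)               # (row, col) of the BMU
contrib = np.abs(sample - som.get_weights()[bmu])
top_features = np.argsort(contrib)[::-1][:5]
print(f"BMU {bmu}; most influential feature indices: {top_features}")
```

In practice, the U-matrix would typically be rendered as a heatmap for the analyst, and a local explanation would reference named NSL-KDD or CIC-IDS-2017 features rather than raw indices.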