Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection
Published in | 2024 19th Annual System of Systems Engineering Conference (SoSE) pp. 92 - 97 |
---|---|
Main Authors | , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 23.06.2024 |
Subjects | |
Online Access | Get full text |
Summary: | The rise of IoT networks has heightened the risk of cyber attacks, necessitating the development of robust detection methods. Although deep learning and other complex models show promise in identifying sophisticated attacks, they face challenges related to explainability and actionable insights. In this investigation, we explore and contrast various explainable AI techniques, including LIME, SHAP, and counterfactual explanations, that can be used to enhance the explainability of intrusion detection outcomes. Furthermore, we introduce a framework that utilizes counterfactual SHAP not only to provide explanations but also to generate actionable insights for guiding appropriate actions or automating intrusion response systems. We validate the effectiveness of various models through meticulous analysis on the CICIoT2023 dataset. Additionally, we perform a comparative evaluation of our proposed framework against previous approaches, demonstrating its ability to produce actionable insights. |
ISSN: | 2835-3161 |
DOI: | 10.1109/SOSE62659.2024.10620966 |