Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection

Bibliographic Details
Published in: 2024 19th Annual System of Systems Engineering Conference (SoSE), pp. 92-97
Main Authors: Gyawali, Sohan; Huang, Jiaqi; Jiang, Yili
Format: Conference Proceeding
Language: English
Published: IEEE, 23.06.2024

Summary: The rise of IoT networks has heightened the risk of cyber attacks, necessitating the development of robust detection methods. Although deep learning and complex models show promise in identifying sophisticated attacks, they face challenges related to explainability and actionable insights. In this investigation, we explore and contrast various explainable AI techniques, including LIME, SHAP, and counterfactual explanations, that can be used to enhance the explainability of intrusion detection outcomes. Furthermore, we introduce a framework that utilizes counterfactual SHAP to not only provide explanations but also generate actionable insights for guiding appropriate actions or automating intrusion response systems. We validate the effectiveness of various models through meticulous analysis on the CICIoT2023 dataset. Additionally, we perform a comparative evaluation of our proposed framework against previous approaches, demonstrating its ability to produce actionable insights.
ISSN: 2835-3161
DOI: 10.1109/SOSE62659.2024.10620966
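
The summary combines SHAP attributions with a counterfactual step to turn explanations into actionable insights. The following is a minimal sketch of that general idea, not the authors' implementation: it assumes a scikit-learn RandomForestClassifier as a stand-in detector, and the feature names (flow_duration, pkt_rate, syn_count, payload_bytes) and synthetic flows are hypothetical placeholders for CICIoT2023 features.

```python
# Illustrative sketch only -- not the paper's code. A stand-in IoT intrusion
# detector is explained with SHAP, then a simple counterfactual search turns
# the explanation into an actionable change. Feature names and data are
# synthetic assumptions, not CICIoT2023 itself.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["flow_duration", "pkt_rate", "syn_count", "payload_bytes"]

# Synthetic flows: attacks (label 1) show short, bursty, SYN-heavy traffic.
X_benign = rng.normal([50.0, 10.0, 1.0, 500.0], [10.0, 3.0, 1.0, 100.0], (500, 4))
X_attack = rng.normal([5.0, 200.0, 40.0, 60.0], [2.0, 50.0, 10.0, 20.0], (500, 4))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 500 + [1] * 500)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributions for one flagged flow: which features drove the alert?
x = X[-1:]                            # one attack flow
explainer = shap.TreeExplainer(model)
vals = explainer.shap_values(x)
if isinstance(vals, list):            # older shap: one array per class
    vals = vals[1]
elif vals.ndim == 3:                  # newer shap: (samples, features, classes)
    vals = vals[..., 1]
for name, v in zip(features, vals[0]):
    print(f"{name:>15s}: {v:+.3f}")   # positive values push toward 'attack'

# Counterfactual step (a simplified stand-in for the paper's counterfactual
# SHAP): slide the flow toward the benign mean until the verdict flips, then
# report the feature changes that would clear the alert.
benign_mean = X[y == 0].mean(axis=0)
x_cf = x.copy()
for step in range(1, 101):
    x_cf = x + (benign_mean - x) * (step / 100.0)
    if model.predict(x_cf)[0] == 0:
        break
for name, d in zip(features, (x_cf - x)[0]):
    print(f"actionable change: {name} {d:+.1f}")
```

The printed deltas illustrate the kind of actionable output the framework targets (e.g., the traffic change needed for a flow to be judged benign); the paper's actual pipeline and its evaluation against prior approaches are carried out on CICIoT2023.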