An Explanation of the LSTM Model Used for DDoS Attacks Classification

Bibliographic Details
Published in: Applied Sciences, Vol. 13, No. 15, p. 8820
Main Authors: Bashaiwth, Abdulmuneem; Binsalleeh, Hamad; AsSadhan, Basil
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.08.2023

Summary: With the rise of DDoS attacks, several machine learning-based detection models have been used to mitigate malicious behavior. Understanding how machine learning models work is not trivial, particularly for complex, nonlinear models such as highly accurate deep learning models. The difficulty of explaining these models creates a tension between accuracy and explainability. Recently, different methods have been used to explain deep learning models and address their opacity. In this paper, we use an LSTM model to classify DDoS attacks. We then investigate explanations of the LSTM using the LIME, SHAP, Anchor, and LORE methods. These methods are applied to explain the predictions for 17 DDoS attack classes, and common explanations are obtained for each class. We also use the output of the explanation methods to extract the intrinsic features needed to differentiate DDoS attacks; our results identify 51 such intrinsic features. We finally compare the explanation methods and evaluate them using the descriptive accuracy (DA) and descriptive sparsity (DS) metrics. The comparison and evaluation show that the explanation methods can explain the classification of DDoS attacks by capturing either the dominant contributions of input features to the classifier's prediction or a set of features with high relevance.
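As a rough illustration of the pipeline the abstract describes, the sketch below trains a toy LSTM classifier on synthetic flow records and explains its per-class predictions with SHAP's model-agnostic KernelExplainer (SHAP being one of the four explanation methods the paper compares). The feature count, network architecture, data, and the length-1 sequence framing are all illustrative assumptions, not the authors' setup.

```python
# A minimal sketch (not the authors' code): explaining an LSTM DDoS
# classifier with SHAP. Feature names, shapes, and data are assumed.
import numpy as np
import shap
from tensorflow.keras import layers, models

n_features = 20   # assumed number of per-flow features
n_classes = 17    # one class per DDoS attack type, as in the paper

# Toy LSTM classifier: each flow record is treated as a length-1
# sequence of feature vectors, one common way to feed tabular flow
# data to an LSTM.
model = models.Sequential([
    layers.Input(shape=(1, n_features)),
    layers.LSTM(64),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Stand-in data; in practice these would be labeled flow records.
X_train = np.random.rand(200, n_features).astype("float32")
y_train = np.random.randint(0, n_classes, size=200)
model.fit(X_train[:, None, :], y_train, epochs=1, verbose=0)

# KernelExplainer needs a function mapping a 2-D feature matrix to
# class probabilities, so we reshape to the LSTM's 3-D input inside.
def predict_fn(x):
    return model.predict(x[:, None, :], verbose=0)

background = shap.sample(X_train, 50)     # background distribution
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(X_train[:5], nsamples=100)

# shap_values holds one attribution per (sample, feature, class);
# depending on the SHAP version it is a per-class list or an array
# with a trailing class axis. Ranking features by |attribution| per
# class surfaces the dominant features, analogous to the per-class
# "common explanations" discussed in the paper.
```

Given such per-class feature rankings, the DA and DS metrics mentioned in the abstract can, roughly speaking, be obtained by masking the top-ranked features and re-measuring classifier accuracy (descriptive accuracy) and by checking how many features receive near-zero relevance (descriptive sparsity).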
ISSN: 2076-3417
DOI: 10.3390/app13158820