An Explanation of the LSTM Model Used for DDoS Attacks Classification
| Published in | Applied Sciences, Vol. 13, No. 15, p. 8820 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Basel: MDPI AG, 01.08.2023 |
Summary: With the rise of DDoS attacks, several machine learning-based detection models have been used to mitigate malicious attack behavior. Understanding how machine learning models work is not trivial. This is particularly true for complex and nonlinear models, such as deep learning models, that achieve high accuracy. The difficulty of explaining these models creates a tension between accuracy and explainability. Recently, different methods have been used to explain deep learning models and address this ambiguity. In this paper, we utilize an LSTM model to classify DDoS attacks. We then investigate the explanation of the LSTM model using the LIME, SHAP, Anchor, and LORE methods. These methods explain the predictions for 17 DDoS attack classes, and common explanations are obtained for each class. We also use the output of the explanation methods to extract the intrinsic features needed to differentiate DDoS attacks. Our results identify 51 intrinsic features for classifying the attacks. We finally compare the explanation methods and evaluate them using the descriptive accuracy (DA) and descriptive sparsity (DS) metrics. The comparison and evaluation show that the explanation methods can explain the classification of DDoS attacks by capturing either the dominant contribution of input features to the classifier's prediction or a set of features with high relevance.
| ISSN | 2076-3417 |
|---|---|
| DOI | 10.3390/app13158820 |
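As an illustration of the pipeline the summary describes, the sketch below trains a small LSTM classifier on windowed flow features and then applies SHAP, one of the four explanation methods named above, to attribute predictions to input features. This is a minimal sketch, not the authors' implementation: the synthetic data, shapes, layer sizes, and helper names (`predict_flat`, `background`) are all assumptions standing in for the paper's actual setup.

```python
# Illustrative sketch only: a minimal LSTM classifier over windowed flow
# features plus a SHAP explanation, in the spirit of the pipeline the
# summary describes. Shapes, layer sizes, and the synthetic data are
# assumptions, not the authors' setup (the paper classifies 17 DDoS classes).
import numpy as np
import shap
from tensorflow.keras import layers, models

n_samples, timesteps, n_features, n_classes = 500, 10, 20, 17

# Synthetic stand-in data; in practice these would be preprocessed flow features.
X = np.random.rand(n_samples, timesteps, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)

model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(64),                                # sequence encoder
    layers.Dense(n_classes, activation="softmax"),  # one output per attack class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# KernelExplainer is model-agnostic but expects 2-D tabular input, so the
# wrapper flattens each (timesteps x features) window and reshapes it back.
def predict_flat(x_flat):
    return model.predict(x_flat.reshape(-1, timesteps, n_features), verbose=0)

background = X[:50].reshape(50, -1)                 # reference set for SHAP
explainer = shap.KernelExplainer(predict_flat, background)
shap_values = explainer.shap_values(X[:5].reshape(5, -1), nsamples=200)

# Depending on the SHAP version, shap_values is a per-class list or a 3-D
# array; either way, ranking features by mean |SHAP value| within a class
# surfaces the inputs that dominate that class's predictions, analogous to
# the per-class common explanations the summary refers to.
```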