A GNN-Based Adversarial Internet of Things Malware Detection Framework for Critical Infrastructure: Studying Gafgyt, Mirai, and Tsunami Campaigns
Published in: IEEE Internet of Things Journal, Vol. 11, No. 16, pp. 26826–26836
Main Authors: , , , , ,
Format: Journal Article
Language: English
Published: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), Piscataway, 15.08.2024
Summary: Significant advancement in deep learning (DL) has made it an integral part of robust approaches for addressing cybersecurity problems in both current and aging infrastructures. Control flow graphs (CFGs) have demonstrated their effectiveness as a leading data representation for training high-performing DL-based malware classifiers. Recently, graph neural networks (GNNs) have made breakthroughs in the graph domain, and before long they were used jointly with CFGs to train performant malware classifiers. However, graph-based adversarial attacks have cast doubt on the predictions these graph-based malware classifiers make, and few studies have investigated detecting such attacks. This article therefore proposes a novel GNN-based adversarial detector that identifies adversarial CFGs with higher efficacy than previous work. The adversarial detector is placed in a data pipeline before a GNN-based malware classifier. We frame adversarial detection as an anomaly detection problem and train the adversarial detector to learn the normal data distribution. Our GNN-based adversarial detector detects 98.96% of all adversarial CFGs, which is 1.17% higher than the previous method, with a 5.95% lower false positive rate (FPR). In the most hazardous attack category, where the attacker intends to make a malicious example pass as a benign input, we achieve a 4.85% improvement over previous competitors. (An illustrative sketch of this anomaly detection setup is given below the record.)
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3298663
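The summary describes placing a GNN-based adversarial detector ahead of the malware classifier and training it only on normal (non-adversarial) CFGs, so that out-of-distribution graphs can be flagged before classification. The sketch below is not the authors' implementation: it assumes a PyTorch Geometric setup and swaps in a generic one-class (Deep SVDD-style) objective over graph embeddings purely to illustrate the idea; the class names, feature dimensions, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only (assumed PyTorch Geometric API, not the paper's code):
# a small GNN encoder embeds each CFG, and a one-class (Deep SVDD-style) objective
# pulls embeddings of clean, non-adversarial CFGs toward a fixed center. At test
# time, graphs whose embeddings fall far from that center are flagged as
# adversarial before they ever reach the downstream GNN malware classifier.
import torch
import torch.nn.functional as F
from torch_geometric.data import Batch, Data
from torch_geometric.nn import GCNConv, global_mean_pool


class CFGAnomalyDetector(torch.nn.Module):
    """Two-layer GCN that maps a whole CFG to a single embedding vector."""

    def __init__(self, num_node_features: int, hidden_dim: int = 64, embed_dim: int = 32):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, embed_dim)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        return global_mean_pool(h, batch)  # one embedding per graph in the batch


def train_one_class(model, loader, center, epochs=10, lr=1e-3):
    """Minimize squared distance of clean-CFG embeddings to the center (hypersphere loss)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for data in loader:  # loader is assumed to yield only normal (clean) CFGs
            optimizer.zero_grad()
            z = model(data.x, data.edge_index, data.batch)
            loss = ((z - center) ** 2).sum(dim=1).mean()
            loss.backward()
            optimizer.step()


@torch.no_grad()
def anomaly_score(model, data, center):
    """Distance to the center; scores above a validation-set threshold are flagged."""
    model.eval()
    z = model(data.x, data.edge_index, data.batch)
    return ((z - center) ** 2).sum(dim=1)


if __name__ == "__main__":
    # Toy CFG with 4 basic blocks, 8 node features each, and 4 control-flow edges.
    graph = Data(
        x=torch.randn(4, 8),
        edge_index=torch.tensor([[0, 1, 2, 0], [1, 2, 3, 3]]),
    )
    batch = Batch.from_data_list([graph])

    model = CFGAnomalyDetector(num_node_features=8)
    # In practice the center would be set to the mean embedding of an initial pass
    # over clean data (a zero center makes the objective trivially satisfiable).
    center = torch.zeros(32)
    print(anomaly_score(model, batch, center))
```

Thresholding these scores on held-out clean CFGs would give the accept/reject decision that gates the downstream classifier; the detection rates and FPR quoted in the summary come from the authors' own detector, not from this sketch.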