Towards Adversarially Superior Malware Detection Models: An Adversary Aware Proactive Approach using Adversarial Attacks and Defenses

Bibliographic Details
Published in: Information Systems Frontiers, Vol. 25, No. 2, pp. 567-587
Main Authors: Rathore, Hemant; Samavedhi, Adithya; Sahay, Sanjay K.; Sewak, Mohit
Format: Journal Article
Language: English
Published: New York: Springer US, 01.04.2023 (Springer Nature B.V.)

Summary: The Android ecosystem (smartphones, tablets, etc.) has grown manifold in the last decade. However, the exponential surge of Android malware is threatening the ecosystem. The literature suggests that Android malware can be detected using machine and deep learning classifiers; however, these detection models might be vulnerable to adversarial attacks. This work investigates the adversarial robustness of twenty-four diverse malware detection models developed using two features and twelve learning algorithms across four categories (machine learning, bagging classifiers, boosting classifiers, and neural networks). We stepped into the adversary's shoes and proposed two false-negative evasion attacks, namely GradAA and GreedAA, to expose vulnerabilities in the above detection models. The evasion attack agents transform malware applications into adversarial malware applications by adding minimum noise (a maximum of five perturbations) while maintaining the modified applications' structural, syntactic, and behavioral integrity. These adversarial malware applications force misclassifications and are predicted as benign by the detection models. The evasion attacks achieved an average fooling rate of 83.34% (GradAA) and 99.21% (GreedAA), which reduced the average accuracy from 90.35% to 55.22% (GradAA) and 48.29% (GreedAA) across the twenty-four detection models. We also proposed two defense strategies, namely Adversarial Retraining and Correlation Distillation Retraining, as countermeasures to protect detection models from adversarial attacks. The defense strategies slightly improved the detection accuracy but drastically enhanced the adversarial robustness of the detection models. Finally, investigating the robustness of malware detection models against adversarial attacks is an essential step before their real-world deployment and can help in developing adversarially superior detection models.
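The record does not include the paper's pseudocode, but the greedy evasion idea the abstract describes can be sketched as follows. This is a hypothetical illustration in the spirit of GreedAA, not the authors' implementation: it assumes binary (present/absent) features such as Android permissions, addition-only perturbations (flipping a feature from 0 to 1, which plausibly preserves the application's original behavior), and a budget of at most five flips; the paper's actual algorithm, features, and constraints may differ.

    # Hypothetical greedy evasion sketch (GreedAA-style); names and
    # feature semantics are illustrative assumptions, not the paper's.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def greedy_evasion(model, x, max_perturbations=5):
        """Greedily flip at most `max_perturbations` absent features
        to present (0 -> 1), each time choosing the single flip that
        most lowers the predicted probability of the malware class
        (assumed to be class 1; class 0 is benign)."""
        x_adv = x.copy()
        for _ in range(max_perturbations):
            if model.predict([x_adv])[0] == 0:
                break  # already predicted benign: evasion succeeded
            best_idx = None
            best_prob = model.predict_proba([x_adv])[0][1]
            for i in np.where(x_adv == 0)[0]:  # candidate features to add
                x_try = x_adv.copy()
                x_try[i] = 1
                p = model.predict_proba([x_try])[0][1]
                if p < best_prob:
                    best_idx, best_prob = i, p
            if best_idx is None:
                break  # no single flip reduces P(malware) further
            x_adv[best_idx] = 1
        return x_adv

    # Toy usage on synthetic data (illustrative only):
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 40))
    y = (X[:, :5].sum(axis=1) > 2).astype(int)  # synthetic "malware" rule
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    x_adv = greedy_evasion(clf, X[y == 1][0])

The Adversarial Retraining defense mentioned in the abstract can likewise be sketched under the same assumptions: generate adversarial variants of the malware training samples, keep them labeled as malware, and retrain the detector on the augmented set.

    # Hypothetical Adversarial Retraining sketch; reuses the
    # illustrative greedy_evasion() above.
    def adversarial_retraining(model, X_train, y_train):
        """Augment the training set with adversarial variants of the
        malware samples (still labeled malware) and retrain."""
        mal = np.where(y_train == 1)[0]
        X_adv = np.array([greedy_evasion(model, X_train[i]) for i in mal])
        X_aug = np.vstack([X_train, X_adv])
        y_aug = np.concatenate([y_train, np.ones(len(X_adv), dtype=int)])
        model.fit(X_aug, y_aug)  # retrain on clean + adversarial data
        return model

Correlation Distillation Retraining is not sketched here, as the abstract gives no detail about its mechanics.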
ISSN: 1387-3326 (print); 1572-9419 (electronic)
DOI: 10.1007/s10796-022-10331-z