On Impact of Adversarial Evasion Attacks on ML-based Android Malware Classifier Trained on Hybrid Features

Bibliographic Details
Published in: 2022 14th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), pp. 216 - 221
Main Authors: Rafiq, Husnain; Aslam, Nauman; Issac, Biju; Randhawa, Rizwan Hamid
Format: Conference Proceeding
Language: English
Published: IEEE, 02.12.2022

Summary: Due to the widespread usage of Android-based smartphones in the current era, Android malware has become a significant concern. Given the advances in machine learning-based approaches over the previous decade, the research community has shown a dominant interest in applying these to counter Android malware. However, these ML-based classifiers are vulnerable to attacks: an attacker can deliberately fabricate the input application to force the classification algorithm to produce the desired output (an evasion attack). In this study, we first propose HybriDroid, an ML-based Android malware classifier trained on hybrid features and optimized using the tree-based pipeline optimization technique (TPOT). Our experiments show that HybriDroid achieves a remarkable detection accuracy of up to 99.2% on a balanced excerpt of 36,000 malware and benign Android apps. Secondly, we explore the effectiveness of the proposed model in adversarial environments. We apply mimicry attacks, feature removal attacks, and feature removal with injection attacks on HybriDroid. Our experiments reveal that ML-based malware classifiers are highly vulnerable to adversarial evasion attacks. Finally, we propose future directions to harden the security of ML-based Android malware classifiers in adversarial settings.
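
As a rough illustration of the techniques named in the summary, the Python sketch below trains a TPOT-optimized classifier on hybrid feature vectors and then probes it with a naive feature-removal evasion attack. The variables X, y, and removable_idx, the choice of label 0 for benign, and the TPOT settings are placeholder assumptions for illustration only, not details taken from the paper.

    # Hypothetical sketch: TPOT-optimized malware classifier plus a naive
    # feature-removal evasion probe. X is a binary hybrid feature matrix,
    # y the labels (0 = benign, 1 = malware), removable_idx the indices of
    # features an attacker can strip from an app; all are assumptions.
    from sklearn.model_selection import train_test_split
    from tpot import TPOTClassifier

    def train_classifier(X, y):
        """Search for a scikit-learn pipeline with TPOT's genetic programming."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=42)
        clf = TPOTClassifier(generations=5, population_size=20,
                             random_state=42, verbosity=2)
        clf.fit(X_tr, y_tr)
        print("Clean test accuracy:", clf.score(X_te, y_te))
        return clf, X_te, y_te

    def feature_removal_attack(clf, X_malware, removable_idx):
        """Zero out attacker-controllable features in malware samples and
        return the evasion rate (fraction now classified as benign)."""
        X_adv = X_malware.copy()
        X_adv[:, removable_idx] = 0          # attacker drops these features
        evaded = (clf.predict(X_adv) == 0)   # 0 assumed to mean benign
        return evaded.mean()

In this framing, the evasion rate is simply the fraction of malware samples the classifier mislabels as benign after the attacker-controllable features are removed; the abstract reports that ML-based classifiers of this kind are highly vulnerable under such adversarial conditions.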
ISSN:2573-3214
DOI:10.1109/SKIMA57145.2022.10029504