Show Me Your Code! Kill Code Poisoning: A Lightweight Method Based on Code Naturalness

Bibliographic Details
Published in Proceedings / International Conference on Software Engineering, pp. 2663–2675
Main Authors Sun, Weisong, Chen, Yuchen, Yuan, Mengzhe, Fang, Chunrong, Chen, Zhenpeng, Wang, Chong, Liu, Yang, Xu, Baowen, Chen, Zhenyu
Format Conference Proceeding
Language English
Published IEEE 26.04.2025
Summary: Neural code models (NCMs) have demonstrated extraordinary capabilities in code intelligence tasks. Meanwhile, the security of NCMs and NCM-based systems has garnered increasing attention. In particular, NCMs are often trained on large-scale data from potentially untrustworthy sources, providing attackers with the opportunity to manipulate the models by inserting crafted samples into the data. This type of attack is called a code poisoning attack (also known as a backdoor attack). It allows attackers to implant backdoors in NCMs and thus control model behavior, which poses a significant security threat. However, effective techniques for detecting the various, often complex, code poisoning attacks are still lacking. In this paper, we propose an innovative and lightweight code poisoning detection technique named KillBadCode. KillBadCode is designed based on our insight that code poisoning disrupts the naturalness of code. Specifically, KillBadCode first builds a code language model (CodeLM) based on a lightweight n-gram language model. Then, given poisoned data, KillBadCode uses CodeLM to identify, as trigger tokens, those tokens whose deletion makes a (poisoned) code snippet more natural. Considering that removing some normal tokens from a single sample might also improve code naturalness, leading to a high false positive rate (FPR), we aggregate the cumulative improvement of each token across all samples. Finally, KillBadCode purifies the poisoned data by removing all poisoned samples that contain the identified trigger tokens. We conduct extensive experiments to evaluate the effectiveness and efficiency of KillBadCode, involving two types of advanced code poisoning attacks (a total of five poisoning strategies) and datasets from four representative code intelligence tasks. The experimental results demonstrate that, across 20 code poisoning detection scenarios, KillBadCode achieves an average FPR of 8.30% and an average recall of 100%, significantly outperforming four baselines. More importantly, KillBadCode is very efficient, with a minimum time consumption of only 5 minutes, and it is on average 25 times faster than the best baseline.
ISSN: 1558-1225
DOI: 10.1109/ICSE55347.2025.00247
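
The summary above describes a three-step pipeline: train a lightweight n-gram language model over code tokens, score each token by how much its deletion improves naturalness, and aggregate those gains across all samples before purifying the data. The following is a minimal sketch of that idea, not the authors' implementation: the bigram order, add-one smoothing, negative log-likelihood scoring, and the toy trigger token are all illustrative assumptions.

```python
# Sketch of deletion-based trigger detection over a tokenized code corpus.
# Assumptions (not from the paper): bigram order, add-one smoothing,
# negative log-likelihood as the (un)naturalness measure.
import math
from collections import Counter, defaultdict


class BigramLM:
    """A tiny add-one-smoothed bigram language model over code tokens."""

    def __init__(self, corpus):
        self.unigrams = Counter()
        self.bigrams = Counter()
        for tokens in corpus:
            padded = ["<s>"] + tokens + ["</s>"]
            self.unigrams.update(padded)
            self.bigrams.update(zip(padded, padded[1:]))
        self.vocab_size = len(self.unigrams)

    def log_prob(self, prev, tok):
        # Add-one (Laplace) smoothing so unseen bigrams get small mass.
        num = self.bigrams[(prev, tok)] + 1
        den = self.unigrams[prev] + self.vocab_size
        return math.log(num / den)

    def neg_log_likelihood(self, tokens):
        padded = ["<s>"] + tokens + ["</s>"]
        return -sum(self.log_prob(p, t) for p, t in zip(padded, padded[1:]))


def trigger_scores(corpus, lm):
    """Aggregate, over all samples, how much deleting each token type
    improves naturalness (i.e., lowers negative log-likelihood)."""
    scores = defaultdict(float)
    for tokens in corpus:
        base = lm.neg_log_likelihood(tokens)
        for i, tok in enumerate(tokens):
            ablated = tokens[:i] + tokens[i + 1:]
            gain = base - lm.neg_log_likelihood(ablated)
            if gain > 0:  # deletion made the snippet more natural
                scores[tok] += gain
    return scores


if __name__ == "__main__":
    # Toy corpus: the token "cl" plays the role of an injected trigger.
    corpus = [
        "def add ( a , b ) : return a + b".split(),
        "def cl add ( a , b ) : return a + b".split(),
        "def mul ( a , b ) : return a * b".split(),
        "def cl sub ( a , b ) : return a - b".split(),
    ]
    lm = BigramLM(corpus)
    ranked = sorted(trigger_scores(corpus, lm).items(), key=lambda kv: -kv[1])
    print(ranked[:3])  # the rare, out-of-place token should rank highest

    # Purification: drop every sample containing the top-ranked candidate.
    trigger = ranked[0][0]
    purified = [tokens for tokens in corpus if trigger not in tokens]
```

On such a corpus, the rare, out-of-place token accumulates positive deletion gains in every poisoned sample, while deleting common tokens (parentheses, keywords) tends to make snippets less natural. Aggregating gains across samples rather than trusting any single snippet is what the abstract credits for keeping the false positive rate low.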