Utilizing Precise and Complete Code Context to Guide LLM in Automatic False Positive Mitigation
Format | Journal Article
---|---
Language | English
Published | 05.11.2024
Summary: | Static Application Security Testing (SAST) tools are crucial for early bug
detection and code quality but often generate false positives that slow
development. Automating false positive mitigation is thus essential for
advancing SAST tools. Past efforts use static/dynamic analysis or machine
learning. The advent of Large Language Models, adept at understanding natural
language and code, offers promising ways to improve the accuracy and usability
of SAST tools. However, existing LLM-based methods need improvement in two key
areas: first, extracted code snippets related to warnings are often cluttered
with irrelevant control and data flows, reducing precision; second, critical
code contexts are often missing, leading to incomplete representations that can
mislead LLMs and cause inaccurate assessments. To ensure the use of precise and
complete code context, thereby avoiding misguidance and enabling LLMs to reach
accurate conclusions, we propose LLM4FPM. One of its core components is
eCPG-Slicer, which builds an extended code property graph and extracts
precise, line-level code context. Moreover, LLM4FPM incorporates the FARF
algorithm, which builds a file reference graph and then efficiently detects all
files related to a warning in linear time, enabling eCPG-Slicer to gather
complete code context across these files. We evaluate LLM4FPM on the Juliet
dataset, where it comprehensively outperforms the baseline, achieving an F1
score above 99% across various CWEs. LLM4FPM leverages a free, open-source
model, avoiding costly alternatives and reducing inspection costs by up to
$2758 per run on Juliet, with an average inspection time of 4.7 seconds per
warning. Our work emphasizes the critical impact of precise and complete code
context and highlights the potential of combining program analysis with LLMs,
improving the quality and efficiency of software development. |
DOI: | 10.48550/arxiv.2411.03079 |