Code Agents are State of the Art Software Testers
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 18.06.2024 |
Summary: | Rigorous software testing is crucial for developing and maintaining high-quality code, making automated test generation a promising avenue for both improving software quality and boosting the effectiveness of code generation methods. However, while code generation with Large Language Models (LLMs) is an extraordinarily active research area, test generation remains relatively unexplored. We address this gap and investigate the capability of LLM-based Code Agents to formalize user issues into test cases. To this end, we propose a novel benchmark based on popular GitHub repositories, containing real-world issues, ground-truth patches, and golden tests. We find that LLMs generally perform surprisingly well at generating relevant test cases, with Code Agents designed for code repair exceeding the performance of systems designed specifically for test generation. Further, as test generation is a task similar to but more structured than code generation, it allows for a more fine-grained analysis using fail-to-pass rate and coverage metrics, providing a dual metric for analyzing systems designed for code repair. Finally, we find that generated tests are an effective filter for proposed code fixes, doubling the precision of SWE-Agent. |
DOI: | 10.48550/arxiv.2406.12952 |
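The two evaluation ideas in the summary, the fail-to-pass criterion and using generated tests to filter candidate fixes, can be made concrete with a short sketch. The Python below is an illustrative assumption, not the paper's actual harness: the `run_tests` helper, the pytest invocation, and the pre-/post-patch repository checkouts are hypothetical stand-ins.

```python
import subprocess
from pathlib import Path

def run_tests(repo: Path, test_file: Path) -> bool:
    """Run a generated test file with pytest inside a repository checkout.
    Hypothetical helper: real benchmark harnesses use per-repo environments."""
    result = subprocess.run(
        ["python", "-m", "pytest", str(test_file), "-q"],
        cwd=repo, capture_output=True,
    )
    return result.returncode == 0

def is_fail_to_pass(repo_buggy: Path, repo_patched: Path, test_file: Path) -> bool:
    """A generated test is fail-to-pass if it fails on the code before the
    ground-truth patch and passes once the patch is applied, i.e. it
    actually reproduces the user issue."""
    return (not run_tests(repo_buggy, test_file)) and run_tests(repo_patched, test_file)

def filter_fixes(repo_buggy: Path, candidates: list[Path], test_file: Path) -> list[Path]:
    """Use generated tests as a filter for proposed fixes: keep only those
    candidate patches under which a previously failing test now passes."""
    if run_tests(repo_buggy, test_file):
        return candidates  # test never failed, so it carries no filtering signal
    return [repo for repo in candidates if run_tests(repo, test_file)]
```

Under these assumptions, a benchmark-level fail-to-pass rate would be the fraction of issues for which `is_fail_to_pass` holds; the coverage metric would additionally require an instrumented test run, which is omitted here.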