SBFT Tool Competition 2024 -- Python Test Case Generation Track
| Field | Value |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | 26.01.2024 |
Summary: Test case generation (TCG) for Python poses distinctive challenges due to the language's dynamic nature and the absence of strict type information. Previous research has successfully explored automated unit TCG for Python, with solutions outperforming random test generation methods. Nevertheless, fundamental issues persist, hindering the practical adoption of existing test case generators. To address these challenges, we report on the organization, challenges, and results of the first edition of the Python Testing Competition. Four tools, namely UTBotPython, Klara, Hypothesis Ghostwriter, and Pynguin, were executed on a benchmark set consisting of 35 Python source files sampled from 7 open-source Python projects, with a time budget of 400 seconds. We considered one configuration of each tool for each test subject and evaluated the tools' effectiveness in terms of code and mutation coverage. This paper describes our methodology, the analysis of the results achieved by the competing tools, and the challenges faced while running the competition experiments.
DOI: 10.48550/arxiv.2401.15189
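
The summary describes running each generator against individual Python modules under a fixed 400-second budget. As a minimal sketch of what such a run can look like, the snippet below drives two of the competing tools, Pynguin and the Hypothesis Ghostwriter, from Python. The project layout (`example_project/triangle.py`), the output directory, and Pynguin's time-budget flag name are assumptions for illustration, not the competition's actual harness.

```python
import os
import subprocess

# Hypothetical names: a module `triangle.py` inside `./example_project`,
# with generated tests written to `./generated_tests`.
PROJECT = "./example_project"
MODULE = "triangle"
OUT = "./generated_tests"
BUDGET_SECONDS = 400  # the competition's per-subject time budget

os.makedirs(OUT, exist_ok=True)

# Pynguin executes the code under test, so it requires this opt-in variable.
env = dict(os.environ, PYNGUIN_DANGER_AWARE="1")

# Search-based generation with Pynguin; the time-budget flag name is an
# assumption and may differ between Pynguin versions (see `pynguin --help`).
subprocess.run(
    [
        "pynguin",
        "--project-path", PROJECT,
        "--module-name", MODULE,
        "--output-path", OUT,
        "--maximum-search-time", str(BUDGET_SECONDS),
    ],
    env=env,
    timeout=BUDGET_SECONDS + 60,  # hard wall-clock cap as a safety net
    check=False,
)

# Template-based generation with the Hypothesis Ghostwriter
# (requires `pip install hypothesis[cli]`; the module must be importable).
with open(os.path.join(OUT, "test_triangle_ghostwriter.py"), "w") as fh:
    subprocess.run(["hypothesis", "write", MODULE], stdout=fh, check=False)
```

The generated suites can then be scored for code and mutation coverage, the two effectiveness measures used in the competition.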