A critical review on the evaluation of automated program repair systems

Bibliographic Details
Published in: The Journal of Systems and Software, Vol. 171, p. 110817
Main Authors: Liu, Kui; Li, Li; Koyuncu, Anil; Kim, Dongsun; Liu, Zhe; Klein, Jacques; Bissyandé, Tegawendé F.
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.01.2021

Summary: Automated Program Repair (APR) has attracted significant attention from the software engineering research and practice communities in the last decade. Several teams have reported promising performance in fixing real bugs, and there is a race in the literature to fix as many bugs as possible from established benchmarks. Gradually, the repair performance of APR tools in the literature has moved from being evaluated by the number of generated plausible patches to the number of correct patches. This evolution became necessary after a study highlighted the overfitting issue in test-suite-based automatic patch generation. At the same time, some researchers insist that the time cost of the repair scenario should be reported as a metric for comparing state-of-the-art systems. In this paper, we discuss how the latest evaluation metrics of APR systems can be biased: since design decisions (in both the approach and the evaluation setup) are not always fully disclosed, their impact on repair performance is unknown and the computed metrics are often misleading. To reduce notable biases of design decisions in program repair approaches, we conduct a critical review of the evaluation of patch generation systems and propose eight evaluation metrics for fairly assessing the performance of APR tools. Finally, we show with experimental data on 11 baseline program repair systems that the proposed metrics highlight several caveats in the literature. We expect wide adoption of these metrics in the community to contribute to the development of practical and reliably performant program repair tools.
ISSN: 0164-1212, 1873-1228
DOI: 10.1016/j.jss.2020.110817
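To make the metric shift the summary describes concrete, here is a minimal Python sketch (hypothetical; the names PatchResult and summarize, the sample Defects4J-style bug IDs, and the field values are illustrative and not taken from the paper). It computes the plausible-patch and correct-patch counts, the correct/plausible ratio whose low values expose overfitting, and a simple time-cost figure:

# Hypothetical sketch of the evaluation metrics contrasted in the summary.
# "Plausible" = the patch passes the full test suite;
# "correct" = a human judged it semantically equivalent to the developer fix.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PatchResult:
    bug_id: str           # benchmark bug identifier
    plausible: bool       # passes all test cases
    correct: bool         # manually validated as a true fix
    repair_time_s: float  # wall-clock time to generate the patch

def summarize(results: list[PatchResult]) -> dict:
    plausible = [r for r in results if r.plausible]
    correct = [r for r in results if r.correct]
    return {
        "plausible_patches": len(plausible),
        "correct_patches": len(correct),
        # overfitting shows up as a low correct/plausible ratio
        "precision": len(correct) / len(plausible) if plausible else 0.0,
        "mean_repair_time_s": mean(r.repair_time_s for r in results) if results else 0.0,
    }

# Illustrative data only, not results from the paper:
results = [
    PatchResult("Chart-1", plausible=True, correct=True, repair_time_s=312.0),
    PatchResult("Lang-33", plausible=True, correct=False, repair_time_s=540.0),
    PatchResult("Math-85", plausible=False, correct=False, repair_time_s=900.0),
]
print(summarize(results))  # {'plausible_patches': 2, 'correct_patches': 1, ...}

Counting only plausible patches would credit this hypothetical tool with two fixes; counting correct patches, as the paper's critique favors, credits it with one.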