Enriching Automatic Test Case Generation by Extracting Relevant Test Inputs from Bug Reports
Main Authors | Ouédraogo, Wendkûuni C; Plein, Laura; Kaboré, Kader; Habib, Andrew; Klein, Jacques; Lo, David; Bissyandé, Tegawendé F |
---|---|
Format | Journal Article |
Language | English |
Published | 22.12.2023 |
Subjects | Computer Science - Software Engineering |
Online Access | https://arxiv.org/abs/2312.14898 |
Abstract | The quality of software is highly dependent on the quality of the
tests it is subjected to. Writing tests for bug detection is thus essential,
but it is time-consuming when done manually. Automating test case generation
has therefore been an exciting research area in the software engineering
community. Most approaches have focused on generating unit tests.
Unfortunately, current efforts often do not lead to the generation of relevant
inputs, which limits the efficiency of automatically generated tests. Towards
improving the relevance of test inputs, we present \name, a technique for
exploring bug reports to identify input values that can be fed to automatic
test generation tools. In this work, we investigate the performance of using
inputs extracted from bug reports with \name to generate test cases with
Evosuite. The evaluation is performed on the Defects4J benchmark. For Defects4J
projects, our study has shown that \name successfully extracted 68.68\% of
relevant inputs when using regular expressions in its approach, versus 50.21\%
relevant inputs without regular expressions. Further, our study has shown the
potential to improve Line and Instruction Coverage across all projects.
Overall, we successfully collected relevant inputs that led to the detection of
45 bugs that were previously undetected by the baseline. |
Author | Ouédraogo, Wendkûuni C; Lo, David; Kaboré, Kader; Bissyandé, Tegawendé F; Klein, Jacques; Plein, Laura; Habib, Andrew |
ContentType | Journal Article |
Copyright | http://creativecommons.org/licenses/by-nc-sa/4.0 |
DOI | 10.48550/arxiv.2312.14898 |
DatabaseName | arXiv Computer Science arXiv.org |
ExternalDocumentID | 2312_14898 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
OpenAccessLink | https://arxiv.org/abs/2312.14898 |
ParticipantIDs | arxiv_primary_2312_14898 |
PublicationCentury | 2000 |
PublicationDate | 2023-12-22 |
PublicationDateYYYYMMDD | 2023-12-22 |
PublicationDecade | 2020 |
PublicationYear | 2023 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Software Engineering |
Title | Enriching Automatic Test Case Generation by Extracting Relevant Test Inputs from Bug Reports |
URI | https://arxiv.org/abs/2312.14898 |
linkProvider | Cornell University |