Preempting Flaky Tests via Non-Idempotent-Outcome Tests

Bibliographic Details
Published in: 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE), pp. 1730 - 1742
Main Authors: Wei, Anjiang; Yi, Pu; Li, Zhengxi; Xie, Tao; Marinov, Darko; Lam, Wing
Format: Conference Proceeding
Language: English
Published: ACM, 01.05.2022

Summary: Regression testing can greatly help in software development, but it can be seriously undermined by flaky tests, which can both pass and fail, seemingly nondeterministically, on the same code commit. Flaky tests are an emerging topic in both research and industry. Prior work has identified multiple categories of flaky tests, developed techniques for detecting these flaky tests, and analyzed some detected flaky tests. To proactively detect, i.e., preempt, flaky tests, we propose to detect non-idempotent-outcome (NIO) tests, a novel category related to flaky tests. In particular, we run each test twice in the same test execution environment, e.g., run each Java test twice in the same Java Virtual Machine. A test is NIO if it passes in the first run but fails in the second. Each NIO test has side effects and "self-pollutes" the state shared among test runs. We perform experiments on both Java and Python open-source projects, detecting 223 NIO Java tests and 138 NIO Python tests. We have inspected all 361 detected tests and opened pull requests that fix 268 tests, with 192 already accepted, only 6 rejected, and the remaining 70 pending.
ISSN: 1558-1225
DOI: 10.1145/3510003.3510170