What makes finite-state models more (or less) testable?

This paper studies how details of a particular model can affect the efficacy of a search for defects. We find that if the test method is fixed, we can identify classes of software that are more or less testable. Using a combination of model mutators and machine learning, we find that we can isolate...


Bibliographic Details
Published in: Automated Software Engineering: Proceedings of the 17th IEEE International Conference on Automated Software Engineering; 23-27 Sept. 2002; pp. 237-240
Main Authors: Owen, D., Menzies, T., Cukic, B.
Format: Conference Proceeding
Language: English
Published: IEEE, 2002

Summary: This paper studies how details of a particular model can affect the efficacy of a search for defects. We find that if the test method is fixed, we can identify classes of software that are more or less testable. Using a combination of model mutators and machine learning, we find that we can isolate topological features that significantly change the effectiveness of a defect detection tool. More specifically, we show that for one defect detection tool (a stochastic search engine) applied to a certain representation (finite state machines), we can increase the average odds of finding a defect from 69% to 91%. The method used to change those odds is quite general and should apply to other defect detection tools being applied to other representations.
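The summary describes estimating the odds that a stochastic search finds a defect in a finite-state machine. A minimal sketch of that idea, in Python: repeated random walks over a toy FSM, counting how often a designated defect state is reached. The model, state names, and functions below are invented for illustration and are not taken from the paper.

```python
import random

# Toy finite-state model: each state maps to its successor states.
# The "defect" state stands in for a reachable fault. All names here
# are hypothetical, chosen only to illustrate the technique.
FSM = {
    "start": ["a", "b"],
    "a": ["a", "b", "defect"],
    "b": ["start", "a"],
    "defect": [],
}

def random_walk(fsm, start="start", target="defect", max_steps=20):
    """Walk randomly from `start`; report whether `target` was reached."""
    state = start
    for _ in range(max_steps):
        if state == target:
            return True
        successors = fsm[state]
        if not successors:       # dead end that is not the defect
            return state == target
        state = random.choice(successors)
    return state == target

def detection_odds(fsm, runs=10_000, **kw):
    """Fraction of independent random walks that reach the defect state."""
    hits = sum(random_walk(fsm, **kw) for _ in range(runs))
    return hits / runs
```

Under this framing, the paper's "model mutators" would correspond to transformations of the `FSM` topology, and the reported 69% to 91% improvement to the change in `detection_odds` before and after such transformations.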
Bibliography:SourceType-Conference Papers & Proceedings-1
ObjectType-Conference Paper-1
ISBN:9780769517360
0769517366
ISSN:1938-4300
2643-1572
DOI:10.1109/ASE.2002.1115019