Using simulation to evaluate prediction techniques [for software]


Bibliographic Details
Published in Proceedings Seventh International Software Metrics Symposium, pp. 349-359
Main Authors Shepperd, M., Kadoda, G.
Format Conference Proceeding
Language English
Published IEEE 2001

Summary: The need for accurate software prediction systems increases as software becomes larger and more complex. A variety of techniques have been proposed, but none has proved consistently accurate. The underlying characteristics of the data set influence the choice of the prediction system to be used. It has proved difficult to obtain significant results over small data sets; consequently, we required large validation data sets. Moreover, we wished to control the characteristics of such data sets in order to systematically explore the relationship between accuracy, choice of prediction system and data set characteristics. Our solution has been to simulate data, allowing both control and the possibility of large validation cases. We compared regression, rule induction and nearest neighbours (a form of case-based reasoning). The results suggest that there are significant differences depending upon the characteristics of the data set. Consequently, researchers should consider the prediction context when evaluating competing prediction systems. We also observed that the more "messy" the data and the more complex the relationship with the dependent variable, the more variability in the results. This became apparent since we sampled two different training sets from each simulated population of data. In the more complex cases, we observed significantly different results depending upon the training set. This suggests that researchers will need to exercise caution when comparing different approaches and utilise procedures such as bootstrapping in order to generate multiple samples for training purposes.
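
The following is a minimal, hedged sketch of the evaluation design described in the summary, not the authors' original experiment: simulate a data set with a known relationship, draw several bootstrap training samples from a simulated "population", and compare prediction systems on a large held-out validation set. The helper name simulate_dataset, the noise level, the linear form of the relationship and the use of a decision tree as a loose stand-in for rule induction are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def simulate_dataset(n, noise=0.5):
    # Simulated project-like data with a known (here, linear) relationship;
    # raising `noise` makes the data "messier".
    X = rng.uniform(0, 10, size=(n, 3))
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + noise * rng.standard_normal(n)
    return X, y

# Large simulated validation set, as the design requires.
X_val, y_val = simulate_dataset(5000)

# Candidate prediction systems: regression, a decision tree as a rough
# stand-in for rule induction, and k-nearest neighbours (case-based reasoning).
models = {
    "regression": LinearRegression(),
    "rule induction (tree)": DecisionTreeRegressor(max_depth=4, random_state=0),
    "nearest neighbours": KNeighborsRegressor(n_neighbors=3),
}

# Bootstrap several training sets from one simulated population to expose
# the variability that comes from the particular training sample drawn.
X_pop, y_pop = simulate_dataset(500)
for b in range(5):
    idx = rng.integers(0, len(y_pop), size=len(y_pop))  # bootstrap resample
    for name, model in models.items():
        model.fit(X_pop[idx], y_pop[idx])
        mae = mean_absolute_error(y_val, model.predict(X_val))
        print(f"bootstrap {b}: {name:<22s} MAE = {mae:.3f}")

Varying the noise level or the functional form of the simulated relationship is the natural lever for exploring how data set characteristics change the relative accuracy of the competing prediction systems, which is the comparison the paper reports.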
ISBN: 9780769510439; 0769510434
ISSN: 1530-1435
DOI: 10.1109/METRIC.2001.915542