The difference of model robustness assessment using cross‐validation and bootstrap methods

Bibliographic Details
Published in: Journal of Chemometrics, Vol. 38, No. 6
Main Authors: Lasfar, Rita; Tóth, Gergely
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.06.2024

Summary: The validation principles for Quantitative Structure–Activity Relationship (QSAR) models issued by the Organisation for Economic Co-operation and Development describe three criteria of model assessment: goodness of fit, robustness, and prediction. For robustness, two internal validation approaches are possible: bootstrap and cross-validation. We compared these validation metrics by checking their sample-size dependence, their rank correlations to other metrics, and their uncertainty. We used modeling methods ranging from multivariate linear regression to artificial neural networks on 14 open-access datasets. We found that the two sets of metrics show similar sample-size dependence and similar correlations to other validation parameters. For both approaches, the uncertainty originating from the calculation recipe of the metrics is much smaller than the part caused by the selection of the training set or the training/test split. We concluded that the metrics of the two techniques are interchangeable, but cross-validation parameters are easier to interpret because their range is similar to that of the goodness-of-fit and prediction metrics. Furthermore, at equal computational load, the variance originating from the random elements of the cross-validation calculation is slightly smaller than that of the bootstrap. The two methods provide nearly the same information on robustness, but we suggest using cross-validation because (a) bootstrap values are outliers relative to the metrics for other validation tasks such as goodness of fit or predictivity, and (b) the uncertainty of the robustness calculation is smaller for cross-validation at equal computational load.
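
The summary contrasts the two internal validation routes without giving their recipes. As an illustration only, not the authors' protocol, the minimal sketch below computes a cross-validated R² and a bootstrap out-of-bag R² for the same regression model; the dataset (scikit-learn's diabetes data), the linear model, the fold count, and the number of bootstrap resamples are all illustrative assumptions.

```python
# Minimal sketch of the two internal validation routes compared in the paper:
# k-fold cross-validation vs. bootstrap (out-of-bag) estimation of robustness.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import r2_score

X, y = load_diabetes(return_X_y=True)   # placeholder open-access dataset
model = LinearRegression()              # simplest of the modeling methods mentioned

# Route 1: k-fold cross-validation (mean held-out R^2)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
q2_cv = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()

# Route 2: bootstrap resampling with out-of-bag evaluation
rng = np.random.default_rng(0)
n, oob_scores = len(y), []
for _ in range(100):                          # resample count chosen arbitrarily
    idx = rng.integers(0, n, size=n)          # draw rows with replacement
    oob = np.setdiff1d(np.arange(n), idx)     # rows never drawn = out-of-bag set
    if oob.size == 0:
        continue
    fitted = LinearRegression().fit(X[idx], y[idx])
    oob_scores.append(r2_score(y[oob], fitted.predict(X[oob])))
q2_boot = float(np.mean(oob_scores))

print(f"cross-validated R^2: {q2_cv:.3f}, bootstrap OOB R^2: {q2_boot:.3f}")
```

Both routes fit the model on resampled training data and score it on data left out of the fit; they differ mainly in how the left-out set is formed, which is why the paper can compare their metrics on equal footing.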
ISSN: 0886-9383, 1099-128X
DOI: 10.1002/cem.3530