Parametric Predictive Bootstrap Method for the Reproducibility of Hypothesis Tests

Bibliographic Details
Published in: Journal of Statistical Theory and Practice, Vol. 19, No. 2
Main Authors: Aldawsari, Abdulrahman M. A.; Coolen-Maturi, Tahani; Coolen, Frank P. A.
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 01.06.2025
ISSN: 1559-8608, 1559-8616
DOI: 10.1007/s42519-025-00438-2

Summary: Hypothesis tests are essential tools in applied statistics, but their results can vary when repeated. The reproducibility probability (RP) quantifies the probability of obtaining the same test outcome—either rejecting or not rejecting the null hypothesis—if a hypothesis test is repeated under identical conditions. In this paper, we apply the parametric predictive bootstrap (PP-B) method to evaluate the reproducibility of parametric tests and compare it with the nonparametric predictive bootstrap (NPI-B) method. The explicitly predictive nature of both methods aligns well with the concept of RP. Simulation studies demonstrate that PP-B provides RP values with less variability than NPI-B, benefiting from the assumed parametric model. The bootstrap approach offers a flexible framework for assessing test reproducibility and can be extended to a wide range of parametric tests.
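
As a rough illustration of the bootstrap idea summarized above, the sketch below estimates the reproducibility probability of a one-sample t-test with a generic parametric (Gaussian) bootstrap: fit the assumed model to the observed data, simulate repeated future samples of the same size, and record how often the repeated test reaches the same decision as the original. The normal model, the plug-in parameter estimates, and the `estimate_rp` helper are illustrative assumptions for this sketch, not the PP-B procedure as specified in the paper.

```python
# Minimal sketch: reproducibility probability (RP) of a one-sample t-test
# via a generic parametric (Gaussian) bootstrap. Illustrative only; not the
# PP-B method defined in the paper.
import numpy as np
from scipy import stats


def estimate_rp(sample, mu0=0.0, alpha=0.05, n_boot=10_000, seed=None):
    """Fraction of simulated repetitions reaching the same t-test decision."""
    rng = np.random.default_rng(seed)
    n = len(sample)

    # Decision of the original test of H0: mu = mu0 at level alpha.
    _, p_obs = stats.ttest_1samp(sample, mu0)
    reject_obs = p_obs < alpha

    # Fit the assumed parametric (normal) model to the observed data.
    mu_hat = np.mean(sample)
    sigma_hat = np.std(sample, ddof=1)

    same_decision = 0
    for _ in range(n_boot):
        # Simulate a future sample of the same size from the fitted model
        # and repeat the test on it.
        future = rng.normal(mu_hat, sigma_hat, size=n)
        _, p_new = stats.ttest_1samp(future, mu0)
        same_decision += (p_new < alpha) == reject_obs

    return same_decision / n_boot


# Example: data with a moderate effect, where RP is often well below 1.
data = np.random.default_rng(1).normal(0.4, 1.0, size=30)
print(f"Estimated RP: {estimate_rp(data, seed=2):.3f}")
```

Estimators that account for parameter uncertainty (as predictive bootstrap methods do) would replace the plug-in simulation step above with draws from a predictive distribution; the plug-in version is used here only to keep the sketch short.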