Should we test the model assumptions before running a model-based test?


Bibliographic Details
Published in: arXiv.org
Main Authors: Shamsudheen, M. Iqbal; Hennig, Christian
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 17.04.2023

Summary: Statistical methods are based on model assumptions, and it is statistical folklore that a method's model assumptions should be checked before applying it. This can be formally done by running one or more misspecification tests of model assumptions before running a method that requires these assumptions; here we focus on model-based tests. A combined test procedure can be defined by specifying a protocol in which first model assumptions are tested and then, conditionally on the outcome, a test is run that requires or does not require the tested assumptions. Although such an approach is often taken in practice, much of the literature that investigated this is surprisingly critical of it. Our aim is to explore conditions under which model checking is advisable or not advisable. For this, we review results regarding such "combined procedures" in the literature, we review and discuss controversial views on the role of model checking in statistics, and we present a general setup in which we can show that preliminary model checking is advantageous, which implies conditions for making model checking worthwhile.
ISSN: 2331-8422
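
To make the idea of a "combined procedure" concrete, here is a minimal sketch of one common instance: a preliminary normality check followed, conditionally on its outcome, by either a model-based two-sample test or a rank-based alternative. The specific tests (Shapiro-Wilk, Welch's t-test, Mann-Whitney U) and the 0.05 threshold are illustrative assumptions, not the particular setup analysed in the paper.

# Sketch of a combined procedure: pre-test the model assumption
# (normality), then choose the main test conditionally on the outcome.
import numpy as np
from scipy import stats


def combined_two_sample_test(x, y, alpha_check=0.05):
    """Normality pre-test on both samples, then conditional choice of main test."""
    normal_x = stats.shapiro(x).pvalue > alpha_check
    normal_y = stats.shapiro(y).pvalue > alpha_check
    if normal_x and normal_y:
        # Normality not rejected: run the model-based test.
        result = stats.ttest_ind(x, y, equal_var=False)
        chosen = "Welch t-test"
    else:
        # Normality rejected: fall back to a test without that assumption.
        result = stats.mannwhitneyu(x, y, alternative="two-sided")
        chosen = "Mann-Whitney U"
    return chosen, result.pvalue


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=50)
    y = rng.exponential(1.0, size=50)  # skewed, so the pre-test should reject
    test_name, p = combined_two_sample_test(x, y)
    print(f"{test_name}: p = {p:.4f}")

The point of the paper is that the error probabilities of such a two-stage protocol differ from those of either main test run unconditionally, which is why the value of the preliminary check has to be assessed for the combined procedure as a whole.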