Asymptotically Distribution-Free (ADF) Interval Estimation of Coefficient Alpha

Bibliographic Details
Published in: Psychological Methods, Vol. 12, No. 2, pp. 157-176
Main Authors: Maydeu-Olivares, Alberto; Coffman, Donna L.; Hartmann, Wolfgang M.
Format: Journal Article
Language: English
Published: United States: American Psychological Association, 01.06.2007
Summary: The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is greatest (small population reliability and small sample size). Taking into account the variability of sample alpha with an interval estimator may lead to retaining reliable tests that would otherwise be rejected. Here, the authors performed simulation studies to investigate the behavior of asymptotically distribution-free (ADF) versus normal-theory interval estimators of coefficient alpha under varied conditions. Normal-theory intervals were found to be less accurate when item skewness was greater than 1 or excess kurtosis was greater than 1. For sample sizes over 100 observations, ADF intervals are preferable, regardless of item skewness and kurtosis. A formula for computing ADF confidence intervals for coefficient alpha for tests of any size is provided, along with its implementation as an SAS macro.
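The paper's ADF interval formula and SAS macro are not reproduced in this record, but the quantity being interval-estimated, sample coefficient alpha, follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal Python sketch of that point estimate (not the authors' ADF procedure) for an n-by-k matrix of item scores:

```python
import numpy as np

def coefficient_alpha(scores):
    """Sample coefficient alpha for an n-by-k matrix of item scores
    (n respondents, k items): alpha = k/(k-1) * (1 - sum of item
    variances / variance of the total score)."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents, 3 Likert-type items
x = np.array([[2, 3, 3],
              [4, 4, 5],
              [1, 2, 2],
              [3, 3, 4],
              [5, 5, 5]])
print(round(coefficient_alpha(x), 3))  # prints 0.975
```

As the abstract notes, this point estimate is biased downward, especially with small samples and low population reliability, which is why the authors advocate reporting an interval estimate around it rather than the point value alone.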
ISSN:1082-989X
DOI:10.1037/1082-989x.12.2.157