Comparison of Multiple-Indicators, Multiple-Causes– and Item Response Theory–Based Analyses of Subgroup Differences

Bibliographic Details
Published in: Educational and Psychological Measurement, Vol. 68, no. 4, pp. 587–602
Main Authors: Willse, John T.; Goodman, Joshua T.
Format: Journal Article
Language: English
Published: Los Angeles, CA: SAGE Publications, 01.08.2008

Summary: This research provides a direct comparison of effect size estimates based on structural equation modeling (SEM), item response theory (IRT), and raw scores. Differences between the SEM, IRT, and raw-score approaches are examined under a variety of data conditions (IRT models underlying the data, test lengths, magnitude of group differences, and relative size of reference and focal groups). Results show that all studied methods perform similarly. All methods tend to underestimate effects as effect sizes become larger. SEM-based approaches to effect size estimation perform somewhat better at shorter test lengths, whereas IRT- and raw-score-based approaches perform somewhat better at longer test lengths. Although these differences between methods are detectable, they are small in magnitude.
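The raw-score baseline the abstract describes can be illustrated with a small simulation sketch. All parameter values below (item counts, group sizes, the 2PL item parameters) are hypothetical choices for illustration, not the article's actual design: responses are generated under a two-parameter logistic (2PL) IRT model for a reference group and a smaller focal group whose latent mean is shifted, and the standardized mean difference (Cohen's d) is then computed on the observed sum scores.

```python
import numpy as np

def simulate_2pl(theta, a, b, rng):
    """Simulate dichotomous responses under a 2PL IRT model.

    theta: latent trait values (one per person)
    a, b: item discrimination and difficulty parameters
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

def cohens_d(x, y):
    """Raw-score standardized mean difference with pooled SD."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

rng = np.random.default_rng(0)
n_items = 30                                  # hypothetical test length
a = rng.uniform(0.8, 2.0, n_items)            # discriminations
b = rng.normal(0.0, 1.0, n_items)             # difficulties

true_d = 0.5                                  # latent group difference
theta_ref = rng.normal(0.0, 1.0, 1000)        # reference group
theta_foc = rng.normal(-true_d, 1.0, 500)     # smaller focal group

ref_scores = simulate_2pl(theta_ref, a, b, rng).sum(axis=1)
foc_scores = simulate_2pl(theta_foc, a, b, rng).sum(axis=1)

d_hat = cohens_d(ref_scores, foc_scores)      # raw-score effect size estimate
```

Because the sum score is a noisy, bounded indicator of the latent trait, `d_hat` will typically fall somewhat below the latent difference of 0.5, consistent with the underestimation pattern the abstract reports for larger effects.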
ISSN: 0013-1644, 1552-3888
DOI: 10.1177/0013164407312601