Gauging Item Alignment Through Online Systems While Controlling for Rater Effects


Bibliographic Details
Published in: Educational Measurement: Issues and Practice, Vol. 34, No. 1, pp. 22-33
Main Authors: Anderson, Daniel; Irvin, Shawn; Alonzo, Julie; Tindal, Gerald A.
Format: Journal Article
Language: English
Published: Washington: Blackwell Publishing Ltd, 01.03.2015
Wiley-Blackwell
Wiley Subscription Services, Inc.

Summary: The alignment of test items to content standards is critical to the validity of decisions made from standards-based tests. Generally, alignment is determined based on judgments made by a panel of content experts, with ratings either averaged or resolved through consensus discussion. When the pool of items to be reviewed is large, or the content-matter experts are broadly distributed geographically, panel methods present significant challenges. This article illustrates the use of an online methodology for gauging item alignment that does not require raters to convene in person, reduces the overall cost of the study, increases time flexibility, and offers an efficient means for reviewing large item banks. Latent trait methods are applied to the data to control for between-rater severity, evaluate intrarater consistency, and provide item-level diagnostic statistics. Use of this methodology is illustrated with a large pool (1,345) of interim-formative mathematics test items. Implications for the field and limitations of this approach are discussed.
Bibliography:
ark: ark:/67375/WNG-LJ7FJ31W-8
istex: D76C45720471201C412A19E145D1E58153E5154F
ArticleID: EMIP12038
ISSN: 0731-1745, 1745-3992
DOI: 10.1111/emip.12038