Measurement invariance in training evaluation: Old question, new context

Bibliographic Details
Published in: Computers in Human Behavior, Vol. 27, No. 5, pp. 2005-2010
Main Authors: Stoughton, J. William; Gissel, Amanda; Clark, Andrew P.; Whelan, Thomas J.
Format: Journal Article
Language: English
Published: Kidlington: Elsevier Ltd, 01.09.2011

Summary: ► Examined the item functioning of parallel Web-based and traditional paper-and-pencil training evaluations. ► Item response theory (IRT) analyses revealed few differences between the two media. ► Supports the equivalence of paper-and-pencil and computer-mediated training evaluations. Technological advances adopted by organizations have not bypassed the training domain. With the shift toward computer-mediated surveys, training evaluations have been converted from traditional paper-and-pencil formats to Web-based environments, which raises the question of whether the two modalities are equivalent. Accordingly, this study examined the item functioning of parallel Web-based and traditional paper-and-pencil evaluations of a training intervention. Item response theory (IRT) analyses revealed few differences in how individuals responded to particular items (i.e., little differential item functioning) across the two administration modes. This provides evidence for the equivalence of paper-and-pencil and computer-mediated training evaluations.
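As a brief illustration (the abstract does not specify which IRT model the authors fit, so the two-parameter logistic model below is an assumption used only for exposition), differential item functioning across the two administration modes can be stated as follows. Under a 2PL model, the probability that respondent j endorses item i in group g (paper-and-pencil or Web-based) is

P_g(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp[-a_{ig}(\theta_j - b_{ig})]}

where \theta_j is the respondent's standing on the latent trait, a_{ig} the item discrimination, and b_{ig} the item difficulty in group g. Item i exhibits DIF when (a_{i,\mathrm{paper}}, b_{i,\mathrm{paper}}) \neq (a_{i,\mathrm{web}}, b_{i,\mathrm{web}}) for respondents at the same \theta; finding few such items, as reported here, supports measurement invariance across the two evaluation formats. Because training-evaluation items are typically Likert-type, a polytomous model such as the graded response model would be the natural extension of this sketch.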
ISSN: 0747-5632, 1873-7692
DOI: 10.1016/j.chb.2011.05.007