SITUATIONAL JUDGMENT TESTS: CONSTRUCTS ASSESSED AND A META-ANALYSIS OF THEIR CRITERION-RELATED VALIDITIES
Published in: Personnel Psychology, Vol. 63, No. 1, pp. 83–117
Main Authors: Michael S. Christian, Bryan D. Edwards, Jill C. Bradley
Format: Journal Article
Language: English
Published: Malden, USA: Blackwell Publishing Inc / Blackwell Publishing Ltd, 01.03.2010
Summary: Situational judgment tests (SJTs) are a measurement method that may be designed to assess a variety of constructs. Nevertheless, many studies in the extant literature fail to report the constructs measured by their situational judgment tests. Consequently, a construct-level focus in the situational judgment test literature is lacking, and researchers and practitioners know little about the specific constructs typically measured. Our objective was to extend the efforts of previous researchers (e.g., McDaniel, Hartman, Whetzel, & Grubb, 2007; McDaniel & Nguyen, 2001; Schmitt & Chan, 2006) by highlighting the need for a construct focus in situational judgment test research. We identified and classified the construct domains assessed by situational judgment tests in the literature into a content-based typology. We then conducted a meta-analysis to determine the criterion-related validity of each construct domain and to test for moderators. We found that situational judgment tests most often assess leadership and interpersonal skills, and that situational judgment tests measuring teamwork skills and leadership have relatively high validities for overall job performance. Although based on a small number of studies, we found evidence that (a) matching the predictor constructs with criterion facets improved criterion-related validity, and (b) video-based situational judgment tests tended to have stronger criterion-related validity than pencil-and-paper situational judgment tests, holding constructs constant. Implications for practice and research are discussed.
Bibliography: ArticleID:PEPS1163; ark:/67375/WNG-QGK0RCZC-H; istex:C7724C79459268CEA97CE059B1794072D8ED2822
Acknowledgments: We are grateful to Winfred Arthur, Jr., Ronald Landis, Michael Burke, and Filip Lievens for reviewing previous drafts of this article and providing valuable suggestions. We also thank Michael McDaniel, Phillip Bobko, and Edgar Kausel for their helpful comments and suggestions on this project. Finally, we acknowledge the work of Jessica Siegel, Helen Terry, and Adela Garza.
Authors' Note: This paper is based in part on the master's thesis of Michael S. Christian, which was chaired by Bryan D. Edwards. An earlier version of this paper was presented at the 2007 Annual Conference of the Society for Industrial and Organizational Psychology.
ISSN: 0031-5826; 1744-6570
DOI: 10.1111/j.1744-6570.2009.01163.x