Comparing Proctored and Unproctored Cognitive Ability Testing in High‐Stakes Personnel Selection
| Published in | International Journal of Selection and Assessment, Vol. 33, no. 1 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Oxford: Blackwell Publishing Ltd, 01.02.2025 |
Summary: ABSTRACT
New advances in computerized adaptive testing (CAT) have increased the feasibility of high‐stakes unproctored testing of general mental ability (GMA) in personnel selection contexts. This study presents the results of a within‐subject investigation of the convergent validity of unproctored tests. Three batteries of cognitive ability tests were administered during personnel selection in the Norwegian Armed Forces. A total of 537 candidates completed two sets of proctored fixed‐length GMA tests before and during the selection process. In addition, an at‐home unproctored CAT battery was administered before the selection process began. Differences and similarities in the convergent validity of the tests were evaluated. The convergent validity coefficients did not significantly differ between the proctored and unproctored batteries, both for observed GMA scores and at the latent‐factor level. The distributions and standardized residuals of test scores in the proctored–proctored and proctored–unproctored comparisons were broadly similar and showed no evidence of score inflation or deflation in the unproctored tests. These similarities also extended to the words similarity test, whose items are moderately searchable online. Although some unlikely individual cases were observed, the overall results suggest that the unproctored tests maintained their convergent validity.
Key findings
Convergent validity: The study found no significant differences in the convergent validity of unproctored and proctored general mental ability (GMA) tests. Both observed scores and latent intelligence factors from unproctored tests were comparable to those from proctored tests, supporting the viability of unproctored internet testing (UIT) in high‐stakes contexts.
Minimal cheating evidence: There was no substantial evidence of widespread cheating in the unproctored tests. While a few outliers exhibited unusually high scores in unproctored settings, these cases accounted for less than 1% of the sample, suggesting that cheating does not significantly undermine the validity of UIT.
Technical issues and underperformance: Unproctored tests were more vulnerable to underperformance caused by technical problems or interruptions; 5% of participants self‐reported such issues during testing. These cases highlight the importance of addressing the unstandardized nature of UIT settings.
Score deviation across testing conditions: A small number of scores deviated notably when unproctored and proctored results were compared. Interestingly, similar deviations were also observed between the two proctored tests, indicating that such variations may not be attributable solely to the unproctored setting, but rather to individual test conditions or candidate performance on specific occasions.
Bibliography: The analyses were conducted by the Norwegian Armed Forces to evaluate a potential transition to unproctored cognitive ability screening. Some preliminary findings were presented at the International Military Testing Association (IMTA) conference in 2023.
ISSN: 0965-075X; 1468-2389
DOI: 10.1111/ijsa.70001