More than g: Evidence for the Incremental Validity of Performance‐Based Assessments for Predicting Training Performance


Bibliographic Details
Published in: Applied Psychology, Vol. 69, No. 2, pp. 302–324
Main Authors: Nye, Christopher D.; Chernyshenko, Oleksandr S.; Stark, Stephen; Drasgow, Fritz; Phillips, Henry L.; Phillips, Jeffrey B.; Campbell, Justin S.
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.04.2020
Summary: Past research has consistently shown that tests measuring specific cognitive abilities provide little if any incremental validity over tests of general mental ability when predicting performance on the job. In this study, we suggest that the seeming lack of incremental validity may have been due to the type of content that has traditionally been assessed. Therefore, we hypothesised that incremental validity can be obtained using specific cognitive abilities that are less highly correlated with g and are matched to the tasks performed on the job. To test this, we examined a recently developed performance‐based measure that assesses a number of cognitive abilities related to training performance. In a sample of 310 US Navy student pilots, results indicated that performance‐based scores added sizeable incremental validity to a measure of g. The significant increases in R2 ranged from .08 to .10 across criteria. Similar results were obtained after correcting correlations for range restriction, though the magnitude of incremental validity was slightly smaller (ΔR2 ranged from .05 to .07).
ISSN: 0269-994X, 1464-0597
DOI: 10.1111/apps.12171
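The ΔR2 statistics the summary reports come from comparing nested regression models: a criterion regressed on g alone, then on g plus the performance-based scores. A minimal sketch of that hierarchical-regression comparison in Python, using simulated data (the sample size matches the study, but the effect sizes, variable names, and data are illustrative assumptions, not the authors' dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 310  # sample size as in the study; the data below are simulated

# Simulated predictors: g and a performance-based score modestly correlated with g
g = rng.normal(size=n)
perf = 0.3 * g + np.sqrt(1 - 0.3**2) * rng.normal(size=n)

# Simulated training criterion with unique variance from both predictors
y = 0.45 * g + 0.30 * perf + rng.normal(scale=0.8, size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_g = r_squared(g[:, None], y)                     # Step 1: g alone
r2_full = r_squared(np.column_stack([g, perf]), y)  # Step 2: g + performance-based score
delta_r2 = r2_full - r2_g                           # incremental validity (ΔR2)
print(f"R2(g) = {r2_g:.3f}, R2(full) = {r2_full:.3f}, dR2 = {delta_r2:.3f}")
```

Because the full model nests the g-only model, R2 cannot decrease at Step 2; a significance test of ΔR2 (an incremental F-test) is what establishes incremental validity in practice.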