Methodologies for establishing validity in surgical simulation studies
| Published in | Surgery, Vol. 147, No. 5, pp. 622-630 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York, NY: Mosby, Inc. (Elsevier), 01.05.2010 |
| ISSN | 0039-6060, 1532-7361 |
| DOI | 10.1016/j.surg.2009.10.068 |
Summary:

Validating assessment tools in surgical simulation training is critical to objectively measuring technical skills. Most published reviews do not describe methodologies for conducting rigorous validation studies. Our study reports current methodological approaches and proposes benchmark criteria for establishing validity in surgical simulation studies.
We conducted a systematic review of studies establishing validity. A PubMed search was performed with the following keywords: “validity/validation,” “simulation,” “surgery,” and “technical skills.” Two reviewers tabulated descriptors for 29 methodological variables.
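The abstract does not reproduce the authors' exact search string. As a minimal sketch, assuming the query simply combines the listed keywords, such a search could be rerun programmatically against PubMed through NCBI's E-utilities via Biopython:

```python
# Illustrative sketch: rerunning a PubMed keyword search with Biopython.
# The query string is an assumed reconstruction from the abstract's
# keywords, not the authors' actual search strategy.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

query = ('(validity OR validation) AND simulation AND surgery '
         'AND "technical skills"')

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records matched")
print("First PMIDs:", record["IdList"][:10])
```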
A total of 83 studies were included in the review. Of these, 60% targeted construct validity, 24% concurrent validity, and 5% predictive validity. Fewer than half (45%) of the studies reported reliability data. Most studies (82%) were conducted at a single institution, with a mean of 37 subjects recruited. Only half of the studies provided a rationale for task selection. Data sources included simulator-generated measures (34%), performance assessment by human evaluators (33%), motion tracking (6%), and combined modes (28%). In studies using human evaluators, videotaping was a common (48%) blinding technique; however, 34% of the studies did not blind evaluators. Commonly reported outcomes included task time (86%), economy of motion (51%), technical errors (48%), and number of movements (25%).
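Percentages like those above come from tabulating the reviewers' coded descriptors across the included studies. A minimal sketch of that tabulation with pandas, assuming a hypothetical coding sheet (column names and values invented; the actual review coded 29 variables):

```python
# Hypothetical coding sheet: one row per reviewed study. The columns and
# values below are invented for illustration only.
import pandas as pd

studies = pd.DataFrame({
    "validity_type":       ["construct", "construct", "concurrent", "predictive"],
    "reports_reliability": [True, False, True, False],
})

n = len(studies)
# Percentage of studies targeting each validity type
pct = studies["validity_type"].value_counts(normalize=True).mul(100).round(1)
print(pct)
# Share of studies that reported any reliability data
print(f"Reported reliability: {studies['reports_reliability'].mean():.0%} of {n} studies")
```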
The typical validation study comes from a single institution with a small sample size, lacks a clear justification for task selection, omits reliability reporting, and carries potential for bias in its design. The lack of standardized validation methodologies creates challenges for training centers that survey the literature to determine the appropriate method for their local settings.
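The abstract does not say which reliability statistics the 45% of studies reported. For assessments scored by human evaluators, inter-rater agreement such as Cohen's kappa is one common choice; a minimal sketch with invented ratings, using scikit-learn:

```python
# Illustrative sketch: one common reliability statistic, Cohen's kappa,
# for agreement between two raters scoring the same performances.
# The rating data below are invented for demonstration.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5]  # e.g. global rating scale scores
rater_b = [3, 4, 3, 5, 3, 4, 5, 2, 3, 4]

# Quadratic weighting penalizes large disagreements more than small ones
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```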