'What would my classmates say?' An international study of the prediction-based method of course evaluation

Bibliographic Details
Published in: Medical Education, Vol. 47, No. 5, pp. 453-462
Main Authors: Schönrock-Adema, Johanna; Lubarsky, Stuart; Chalk, Colin; Steinert, Yvonne; Cohen-Schotanus, Janke
Format: Journal Article
Language: English
Published: Oxford, UK: Blackwell Publishing Ltd, 01.05.2013
Publisher: Wiley-Blackwell / Wiley Subscription Services, Inc.

More Information
Summary:
Objectives: Traditional student feedback questionnaires are imperfect course evaluation tools, largely because they generate low response rates and are susceptible to response bias. Preliminary research suggests that prediction-based methods of course evaluation, in which students estimate their peers' opinions rather than provide their own personal opinions, require significantly fewer respondents to achieve comparable results and are less subject to biasing influences. This international study seeks further support for the validity of these findings by investigating: (i) the performance of the prediction-based method, and (ii) its potential for bias.
Methods: Participants (210 Year 1 undergraduate medical students at McGill University, Montreal, Quebec, Canada, and 371 Year 1 and 385 Year 3 undergraduate medical students at the University Medical Center Groningen [UMCG], University of Groningen, Groningen, the Netherlands) were randomly assigned to complete course evaluations using either the prediction-based or the traditional opinion-based method. The numbers of respondents required to achieve stable outcomes were determined using an iterative process. Differences between the methods in the number of respondents required were analysed using t-tests. Differences in evaluation outcomes between the methods, and between groups of students stratified by four potentially biasing variables (gender, estimated general level of achievement, expected test result, satisfaction after examination completion), were analysed using multivariate analysis of variance (MANOVA).
Results: Overall response rates in the three student cohorts ranged from 70% to 94%. The prediction-based method required significantly fewer respondents than the opinion-based method (averages of 26-28 and 67-79 respondents, respectively) across all samples (p < 0.001), whereas the outcomes achieved were fairly similar. Bias was found in four of 12 opinion-based condition comparisons (three sites, four variables), and in only one comparison in the prediction-based condition.
Conclusions: Our study supports previous findings that prediction-based methods require significantly fewer respondents to achieve results comparable with those obtained through traditional course evaluation methods. Moreover, our findings support the hypothesis that prediction-based responses are less subject to bias than traditional opinion-based responses. These findings lend credence to the prediction-based approach as an accurate and efficient method of course evaluation. Discuss ideas arising from this article at 'discuss'.
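The Methods state that the number of respondents required for stable outcomes was determined "using an iterative process", but the abstract does not specify that algorithm. The Python sketch below is only one plausible illustration of such a procedure, under stated assumptions: simulated 5-point ratings, a 0.1-point tolerance around the full-cohort mean, and a stopping rule of three consecutive stable sample sizes. The function respondents_needed and every number in it are hypothetical and are not taken from the article.

import random
import statistics

def respondents_needed(ratings, tolerance=0.1, consecutive=3, seed=0):
    """Return the smallest sample size at which the running mean first stays
    within `tolerance` of the full-cohort mean for `consecutive` sizes in a row.
    This stopping rule is an illustrative assumption, not the authors' method."""
    rng = random.Random(seed)
    shuffled = list(ratings)
    rng.shuffle(shuffled)                      # simulate respondents arriving in random order
    target = statistics.mean(ratings)          # the full-cohort mean serves as the reference outcome
    stable_run = 0
    for n in range(1, len(shuffled) + 1):
        sample_mean = statistics.mean(shuffled[:n])
        stable_run = stable_run + 1 if abs(sample_mean - target) <= tolerance else 0
        if stable_run >= consecutive:
            return n - consecutive + 1         # first sample size of the stable run
    return len(shuffled)                       # never stabilised before the full cohort

if __name__ == "__main__":
    rng = random.Random(42)
    # Hypothetical 5-point ratings for a cohort of 371 students (purely illustrative).
    cohort = [rng.choice([3, 4, 4, 5, 5]) for _ in range(371)]
    print("Respondents needed for a stable mean:", respondents_needed(cohort))

In a real evaluation setting the tolerance, the stability criterion and the reference value would need to be chosen and justified; the study's own criteria may well differ from those assumed here.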
Bibliography: ArticleID: MEDU12126
ark: ark:/67375/WNG-QB11GWCQ-4
istex: 03B11983F22C33F3EA91A411B42314B4B2F943BE
ISSN: 0308-0110
EISSN: 1365-2923
DOI: 10.1111/medu.12126