Variability in students' evaluating processes in peer assessment with calibrated peer review
Published in: Journal of Computer Assisted Learning, Vol. 33, No. 2, pp. 178-190
Format: Journal Article
Language: English
Published: Oxford: Wiley-Blackwell (Wiley Subscription Services, Inc.), 01.04.2017
Summary: This study investigated students' evaluating processes and their perceptions of peer assessment when they engaged in peer assessment using Calibrated Peer Review, a web-based application that facilitates peer assessment of writing. One hundred and thirty-two students in an introductory environmental science course participated in the study. Two self-reported surveys and a focus group interview were administered during the semester, and peer assessment data and demographic information were collected at the end of the semester. Although the results support agreement between peer and expert ratings, variation was found at both the group and individual levels, particularly when students evaluated mid-quality or low-quality writing, regardless of their reviewing ability. Students tended to perceive the process of evaluating peers' and their own writing as helpful to their learning. Further, students' positive perceptions of peer assessment were associated with their understanding of the value of peer assessment tasks and their perceptions of achieving the course goals. We conclude that, to reduce variation in students' ratings and promote learning, instructors should provide specific guidelines for how to decide on a rating, use actual students' essays instead of instructor-developed samples to train students, and require written explanations for rubric questions.
Lay Description
What is already known about this topic:
Peer assessment of writing has positive formative effects on student achievement.
Student reviewers can improve their writing more by giving comments than by receiving them.
The accuracy of peer ratings remains a concern in implementing peer assessment with computer‐assisted tools.
Some students believe that peers are not qualified to review and assess students' work.
What this paper adds:
Providing specific guidelines for how to decide on a rating for each rubric question can reduce variation in students' ratings.
Students' positive perceptions of peer assessment are associated with their understanding of the purposes and values of peer assessment tasks in relation to course goals.
Implications for practice and/or policy:
Instructors need to promote the learning value of peer assessment by discussing the purposes of the tasks in relation to course goals and explaining how students will benefit from them.
Using actual students' writing instead of instructor-developed samples could be more effective for training students.
ISSN: 0266-4909 (print), 1365-2729 (electronic)
DOI: 10.1111/jcal.12176