Aggregating partial rankings with applications to peer grading in massive online open courses

Bibliographic Details
Published in: arXiv.org
Main Authors: Caragiannis, Ioannis; Krimpas, George A.; Voudouris, Alexandros A.
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 17.11.2014
Summary: We investigate the potential of using ordinal peer grading for the evaluation of students in massive online open courses (MOOCs). Under such grading schemes, each student receives a few assignments (submitted by other students) which she has to rank. Then, a global ranking (possibly translated into numerical scores) is produced by combining the individual ones. This is a novel application area for social choice concepts and methods, where the central problem is the following: how should the assignments be distributed so that the collected individual rankings can be easily merged into a global one that is as close as possible to the ranking representing the relative performance of the students on the assignment? Our main theoretical result shows that, under very simple schemes for distributing the assignments in which each student has to rank only \(k\) of them, a Borda-like aggregation method can recover a \(1-O(1/k)\) fraction of the true ranking when each student correctly ranks the assignments she receives. Experimental results strengthen our analysis further and also demonstrate that the same method is extremely robust even when students have imperfect capabilities as graders. We believe that our results provide strong evidence that ordinal peer grading can be a highly effective and scalable solution for evaluation in MOOCs.
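The Borda-like aggregation described in the summary can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's exact procedure: each grader's bundle of \(k\) assignments contributes Borda points (an assignment ranked at position \(p\) in a bundle of size \(k\) earns \(k-1-p\) points), and assignments are then sorted by total points to form the global ranking. The function name and the example bundles are hypothetical.

```python
from collections import defaultdict

def aggregate_partial_rankings(partial_rankings, n_items):
    """Borda-like aggregation of partial rankings.

    partial_rankings: list of bundles; each bundle is a list of item ids
    ordered best-first, as ranked by one grader.
    n_items: total number of assignments.
    Returns a global ranking (list of item ids, best-first).
    """
    scores = defaultdict(float)
    for bundle in partial_rankings:
        k = len(bundle)
        for pos, item in enumerate(bundle):
            # Borda points within this bundle: top item gets k-1, last gets 0.
            scores[item] += k - 1 - pos
    # Sort by total score, descending; ties broken by item id (sort stability).
    return sorted(range(n_items), key=lambda i: -scores[i])

# Toy example: 6 assignments, true quality equals the item id (5 is best).
# Four graders each rank a bundle of k=3 assignments perfectly.
bundles = [[5, 3, 1], [4, 2, 0], [5, 4, 3], [2, 1, 0]]
global_ranking = aggregate_partial_rankings(bundles, 6)
```

With perfect graders, as here, the best and worst assignments already land at the two ends of the recovered ranking; the middle may contain ties, which is consistent with the summary's claim of recovering a \(1-O(1/k)\) fraction of the true ranking rather than all of it.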
ISSN:2331-8422