Is peer grading an effective assessment method for open and online learning? What about in MOOCs, where student feedback may be the only means of determining a pass or fail in a course? This post examines peer grading and suggests the conditions that must be present for peer grading to be effective.
Debbie Morrison takes a view on peer grading that is orthogonal to the view espoused by Jonathan Rees, scooped here a few days ago (http://sco.lt/6wRRPF). Their disagreement, Debbie says, is rooted in their different views on how people learn. Whereas Rees holds 'a cognitive and instructor-focused learning orientation', Debbie Morrison's view is social constructivist. Her experience also differs from Rees's: peer grading worked fine, she says, in the Digital Cultures course she recently completed.
Debbie Morrison does agree with Jonathan Rees that there may be an issue with the quality of the feedback. Social loafers and people who lack the required skills produced useless feedback. So, she concludes, for peer grading to be effective a number of conditions have to be in place. These include a similar skill level among peers, low-stakes assignments, no credits being awarded, and learner maturity.
It seems to me that there is less disagreement than Debbie Morrison suggests. First, although they may hold different views on how people learn (instructivist versus social constructivist), I don't think this affects their disagreement at all. Second, I guess both agree that if the conditions Morrison lists (and perhaps a few others) are met, peer grading is likely to work. The residual disagreement is purely factual: whereas Morrison happened to have a good experience in her Digital Cultures course, Jonathan Rees did not in his. That could be a matter of coincidence or a matter of bad design. So, the real questions should be:
i) what are the conditions that guarantee productive peer grading (theoretical),
ii) can MOOCs, given the constraints under which they operate, meet those conditions (a factual design issue)?
But in the background another, much more interesting question lurks. It has to do with formative and summative assessments. Peer grading is by definition summative. You may wonder whether we had better forget about summative assessments altogether in not-for-credit courses such as xMOOCs, and whether formative assessments by several people aren't much more valuable. For one, the assessments help the assessee, admittedly only to the extent that the peer assessments are done skillfully. Indeed, a community approach may be followed in which assessors and assessee discuss the merits of the assessments, which not only helps the assessee get a better grasp of what the course is about but also helps the assessors hone their assessment skills.
I would suggest that the whole idea of peer grading in MOOCs is a remnant of a defunct underlying philosophy that courses are always about credits, even if technically you cannot guarantee their quality. Once we admit that MOOCs are not about credits but about learning, the whole issue of peer grading disappears. The only issue that remains is how to learn productively in such settings. (Formative) peer assessment is one way of doing so (see also the discussion in this scoop about peer support http://sco.lt/7WiOtV). (@pbsloep)