The Role of Practice in Promoting Learning
The Web is an excellent medium for presenting knowledge, whether through text or (arguably more engagingly) video lectures and tutorials. However, only when learners use newly acquired knowledge to answer questions and solve problems is the passive reception of knowledge transformed into real understanding. Peer assessment is one form of practice rapidly increasing in use with MOOCs. Its benefits and limitations are considered below.
Practice fulfills various roles, eg to:
- aid retention and promote consolidation of learned content;
- highlight areas needing further work;
- allow teachers to adapt classes, eg revisiting poorly understood concepts;
- provide learners with proof of accomplishment (certification).
Computerized Assessment – Factual Recall
The Web excels at promoting/testing factual recall, eg:
* “flashcards” as reminders of key facts
* finite/multiple choice questions (MCQs, of various forms)
* short response questions (eg number or single word as answer)
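Automated grading of such short-response questions can be sketched as follows; this is a hypothetical illustration (the function and parameter names are mine, not from any particular platform), assuming answers are either single words or numbers checked within a small tolerance:

```python
# Hypothetical sketch of short-response auto-grading: case-insensitive
# matching for single-word answers, tolerance-based comparison for
# numeric answers. Names are illustrative only.

def grade_short_answer(response, accepted, tolerance=0.01):
    """Return True if the response matches any accepted answer."""
    response = response.strip().lower()
    for answer in accepted:
        answer = str(answer).strip().lower()
        # Try a numeric comparison first, so "3.0" matches "3".
        try:
            if abs(float(response) - float(answer)) <= tolerance:
                return True
        except ValueError:
            pass
        if response == answer:
            return True
    return False
```

Such checks are cheap at any scale, which is precisely why MOOCs lean on them for factual recall.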
For deep understanding learners need more than factual recall. They must be able to select and synthesize relevant facts and apply them to a novel, non-trivial problem. A business studies class may use MCQs to test recall of business concepts, but such concepts are only useful when applied, eg learners are given a case study describing a struggling business and asked to formulate a plan to improve its fortunes.
Longer assignments are less suited to computerized assessment, as artificial intelligence is still far from being able to comprehend the subtlety of complex argument. And in mass-delivered education the learner:instructor ratio is far too high for individual marking by the class teacher.
Peer Assessment – The MOOC Solution
MOOCs have sought to incorporate in-depth assignments by adopting the approach of peer assessment.
When setting assignments instructors also issue a rubric indicating how submissions will be marked. Once submissions are made they are anonymized and randomly allocated among students for marking. Typically each assignment is marked by a number of peers to average out inconsistencies (applying the “wisdom of crowds”).
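The allocation-and-averaging scheme described above could be sketched roughly as below. This is a minimal illustration under my own assumptions (round-robin over a shuffled ordering, `k` markers per submission), not the actual algorithm used by any MOOC platform:

```python
import random

def allocate_reviews(student_ids, k=3, seed=0):
    """Assign each submission to k distinct peers, never its author.
    A round-robin over one random ordering gives every student exactly
    k submissions to mark (assumes more than k students)."""
    rng = random.Random(seed)
    order = list(student_ids)
    rng.shuffle(order)
    n = len(order)
    assignments = {s: [] for s in student_ids}  # submission author -> markers
    for offset in range(1, k + 1):
        for i, author in enumerate(order):
            marker = order[(i + offset) % n]
            assignments[author].append(marker)
    return assignments

def final_grade(peer_marks):
    """Average the peer marks to smooth out individual inconsistency
    (the "wisdom of crowds" step)."""
    return sum(peer_marks) / len(peer_marks)
```

Averaging several independent marks is what makes the scheme tolerable despite individual markers being inconsistent; the more markers per submission, the smaller the variance of the final grade.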
Motivations for participation in peer assessment include:
- it is a requirement for receiving a grade on your own work;
- viewing alternative responses to the same question is a valuable learning opportunity;
- it is a fair and altruistic payback for receiving quality, free/low-cost education.
Potential Pitfalls of Peer Assessment
- It relies on the altruism of students to assess fairly.
- Students may be tempted to take only a cursory glance at peers' work and issue grades largely at random (the vast majority study part-time alongside numerous other life demands).
- Students may be tempted to give low marks in order to make their own work appear better.
- Students may bring other issues to the table, eg allow marks to be influenced by whether they personally agree or disagree with the argument being presented.
- Massive open courses attract students with a wide variance in knowledge and experience, and hence in their ability to interpret the rubric fairly.
- People study MOOCs for different reasons and attach different levels of importance to them, which may lead to inconsistent levels of diligence in marking.
- People vary in their “generosity” towards others, some even tending to “abuse” their newly acquired authority (see “Plagiarism” below).
Plagiarism in Peer Assessment
On a MOOC I am currently (at time of writing) undertaking, the issue of plagiarism emerged as a major concern from the peer-assessed activity. It wasn't something I'd even considered until a lengthy and heated debate arose on the discussion boards, with some quite extreme responses being expressed (almost a cyber-equivalent of the Stanford Prison Experiment).
As a result I went back and ran my 6 assigned essays through http://smallseotools.com/plagiarism-checker/ and to my surprise found one had been almost entirely lifted from the Web, and a second had copied and pasted significant chunks without attribution.
Although a small sample, this, combined with the discussion, does suggest plagiarism is a significant issue, and makes a strong case for i) informing participants what constitutes plagiarism, ii) auto-checking submissions for non-original content rather than relying on peer assessors to identify issues.
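The core of such auto-checking can be illustrated with a simple word n-gram overlap measure. This is only a sketch under my own assumptions: real services like the checker linked above compare against Web-scale indexes, whereas this compares a submission against a single known source text:

```python
def ngrams(text, n=5):
    """Set of word n-grams, lowercased with punctuation stripped."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams also present in the source.
    Values near 1.0 suggest large copied passages."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)
```

Even a crude measure like this, run automatically at submission time, would catch wholesale copy-and-paste far more reliably than hoping a peer marker happens to recognize the source.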
An Alternative to Peer Assessment – Delegation
Rather than (or as well as?) assigning peers to assess work, the task could be undertaken by recent graduates of the course concerned, eg those 1 or 2 grades ahead of current students.
As the markers need only be a little ahead of those being assessed there is likely to be a more ready supply than full professors or even TAs.
As markers have nothing to gain (their own work is not being assessed) there would likely need to be a small fee involved (perhaps just a few dollars per assignment).