Peer review is a widely accepted practice, particularly in writing classes, from high school through college and graduate school. The goal of peer review is typically two-fold:

  1. To help students get valuable feedback at the draft stage of their work.
  2. To help students more deeply understand the goals of the assignment.

Unfortunately, peer review is often used as a busy-work activity, or a process that takes advantage of conscientious students while allowing others to do superficial work. For instance, many teachers will hand out a list of peer review questions in class, and then give students 30 minutes to review two papers written by their colleagues. An open-ended question might be:

  • “Did the writer adequately summarize and discuss the topic? Explain.”

Many students will write “Yes” under this question and move on. Without review by the instructor (difficult when many instructors have 50 to 150 students), these students can destroy the social contract of peer review. Other students will spend a lot of time making line edits to the draft – correcting grammar, making minor changes to sentences, etc. At the draft stage this is probably inappropriate – the focus should be on ideas and big-picture organization, not embroidery. Plus, some students aren’t qualified to be dictating where the semicolon should go.

Students aren’t alone in having these problems; in 1982, Nancy Sommers published her highly influential piece, “Responding to Student Writing,” in which she observed how little teachers understand about the effects of their commenting practices; essentially, they don’t know what their comments do. She raised numerous long-standing points in her evaluation of teachers’ first- and second-draft comments on papers.

Two of her major findings:

  1. Teachers provide paradoxical comments that lead students to focus more on “what teachers commanded them to do than on what they are trying to say” (151).
  2. She found “most teachers’ comments are not text-specific and could be interchanged, rubber-stamped, from text to text” (152). One result is that revising, for students, becomes a “guessing game” (153). Sommers concluded by saying, “The challenge we face as teachers is to develop comments which will provide an inherent reason for students to revise” (156).

Most teachers have experienced this last point when a student asks, “What do you want?” The student doesn’t understand the larger goal of the assignment and has learned that achievement comes through figuring out the personal foibles of their current teacher.

These outcomes are unfortunate, because peer review (and written feedback from teachers) can be one of the most powerful learning experiences for students.

From our perspective, peer review should:

  1. Help the students improve their work through the drafting process.
  2. Deepen the understanding of the assignment and its goals for both authors and reviewers (and teachers!).
  3. Allow instructors to assign more authentic work without requiring they read and grade piles of papers – so they do more coaching than grading.
  4. Give students opportunities to create authentic work – that is, peer reviews written (or spoken) to a real audience: the author. Peer review, in the words of Drexel University’s Dr. Scott Warnock, “can be some of the most important writing students ever do – because they have a real audience for their work.”

Peer Review and the LMS

Before considering pedagogy, the first challenge to peer review is largely one of workflow. How to effectively set up teams of reviewers? How to manage student work? How to enforce deadlines and reward students who are conscientious?

Learning Management Systems like Blackboard, Blackboard Vista/CE6, and Moodle can streamline the peer review process.

Through much trial and error and philosophizing, along with combining best practices from many educators, from middle-school through graduate school, we propose a process to minimize the load on the teacher and maximize the benefits of peer review. This process leverages the best of the Learning Management Systems to automate much of the process, leaving the teacher to focus on value-added activities far more useful than busywork.



Students need to do something. Without getting into curriculum design debates, we will assume that ‘something’ is fairly clearly defined. Let’s also assume that there are defined deadlines for work. Here’s a suggested timeline for a typical 4 to 5 page paper:
Drafts: Tuesday, 9/22 by 11:59pm
Peer reviews: Friday, 9/25 by 11:59pm
Final Drafts: Tuesday, 9/29 by 11:59pm

One of the immediately valuable aspects of peer review is its ability to combat procrastination. Too often we assign projects, give a deadline four or five weeks out, and expect students to perform. No manager would assign a major project to a new employee and then check back four weeks later (at least no effective manager). They would require status reports, check in informally from time to time, etc. So by requiring some sort of draft a week before the final deadline, teachers can help students get started.

To help gain students’ attention, we highly recommend that the quality of peer reviews be worth 20% or more of the final Assignment grade (based largely on effort, not skill, since this can be difficult work for students).

We also recommend that students be required to include a cover letter (in the same document), addressed to their reviewers, that details their goals with the assignment and specifies the top two or three areas in which they would like feedback. This helps students reflect on their work and also gives the peer reviewers a ‘heads up’ concerning specific issues.


This is the stage where things can begin to get challenging for teachers new to the idea. Too often the criteria are in our heads, or we don’t share them with students until well after the students have begun working on their projects. Luckily, the 80/20 rule applies to student work: 80% of the projects we assign, from middle school through graduate school, are covered by 20% of the potential criteria we might think of. So there are lots of models to borrow from.
The criterion listed below helps students decide how original an argument might be (useful from high school through college), breaks the issue down, and uses language students can probably understand.

It could be easily modified for a particular audience – made either more sophisticated or simplified to highlight the differences between performance levels. You can find the entire rubric in Appendix A.

This rubric might be exactly the same for students and teachers, or teachers might use a slightly different version (with more direct language). The rubric should be posted along with the Assignment. Ideally, students would help design and refine the rubric.

It is important to use open-ended questions when designing peer reviews; this example gives just the declarative ‘observations’ that help the student distinguish between levels of performance. To see how open-ended questions can help structure the review process, see Appendix C: Maximizing Peer Review with Waypoint Outcomes.

  1. (Excellent) Wow, this is a highly insightful argument – you go far beyond our classroom discussion and readings.
  2. (Good) Okay, you’ve definitely introduced an interesting (not predictable) argument, but it doesn’t seem thought through enough or lacks necessary evidence.
  3. (Fair) This is bordering on a summary of information rather than an argument.
  4. (Poor) There is hardly any insight here – it’s mostly summary of various points of view or conventional wisdom (that is, predictable).
  5. (Unacceptable) ZERO argument. The author never presents fresh ideas or interpretation.

There are a number of pedagogical choices implied in the above sample criterion: whether to include specific performance descriptors (Excellent, Good, etc.), how sophisticated or informal to make the language, etc. Each educator should make these decisions to suit their needs.

This rubric or criteria set should be made available to students in advance, to guide the development of their drafts and to structure reviewers’ responses to the author.


Many instructors will try to assign specific students to specific peer review groups. Certainly, if a teacher knows their students very well and can specify teams, then all the better.

However, creating and managing groups manually can create a pile of work for the instructor. What happens when a student emails to say they are sick, and they can’t post a draft? Does the instructor need to edit the peer review groups and move students around?

The trick is to reward the students who complete their work on time and not reward (notice the term isn’t necessarily punish) those who do not. Whether collected in a face-to-face class, or enforced with deadlines in the LMS, a certain percentage of students will not submit a draft on time, endangering the entire process or requiring instructor intervention to rebalance the work. In our experience, assigning exact ‘peer review partners’ is inefficient and unnecessary to achieve the stated goals of a peer review process.

Whether randomly created or designed, the Groups feature of most LMSs can help structure the who-reviews-whom question.
There are benefits to different strategies, but here’s an approach using Groups and Discussion Boards. Let’s assume that we want students to peer review two of their fellow students’ papers:

  1. Create Groups and place six students in each (so a class/section with 24 students would have four Groups). This way, every student is guaranteed to get two reviews, even if one or two of their fellow students fail to post a draft or neglect their reviews.
  2. Create a Discussion Topic for each Group – so only the students in the Group have access to the discussion. Make sure the Discussion Topic will be unavailable after the deadline.
  3. Require students to post an attachment and the plain text of their draft. This way, the instructor can quickly review drafts without opening attachments, while students can open fully formatted versions.
  4. Students review the two drafts posted before (chronologically) their own. This way the process isn’t destroyed by students who do not post on time, and again we reward those who adhere to deadlines. Students who post first review the students who post last.
  5. Put some teeth into the process (see Appendix B for sample instructions intended for students). We don’t recommend punishment, but rather the requirement that students actively do something – visit the writing center or similar.

Students must post a draft by the deadline to receive peer reviews. If they do not post their own work by the deadline, they still must review two other students’ work.

Obviously the particular solution will need to suit the teacher’s sensibilities, but the idea is to give the students who might miss the deadline more work to do. If they know this in advance, they are more likely to seek the easiest path – following instructions.
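The review-the-two-drafts-posted-before-yours rotation described above is simple enough to automate from an LMS export. Here is a minimal sketch (the function name and input format are our own, not any LMS API): given each group member and the time they posted, it assigns every student the two drafts posted immediately before theirs, wrapping around so the first posters review the last.

```python
def assign_reviewers(posts, reviews_per_student=2):
    """posts: list of (student, post_timestamp) pairs for ONE group.

    Returns {student: [authors to review]}, where each student reviews
    the drafts posted chronologically just before their own, wrapping
    around so the earliest posters review the latest.
    """
    # Sort the group's members by when they posted their drafts.
    ordered = [student for student, _ in sorted(posts, key=lambda p: p[1])]
    n = len(ordered)
    return {
        # Modular indexing wraps the first posters around to the last.
        student: [ordered[(i - k) % n] for k in range(1, reviews_per_student + 1)]
        for i, student in enumerate(ordered)
    }
```

Students who never post simply never appear in the input, so the rotation still produces two reviews per submitted draft as long as at least three students in the group post on time.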

Peer reviewers can simply post a response to the author’s original discussion post, using the rubric (criteria) as a guide.


Discussion Topics can be convenient for grouping students and segmenting the reviews; most LMS platforms will tell the instructor how many unread messages are posted to a given Topic. So in our example, there should be six posts the morning after the deadline. But often students will post twice (they forget to attach, post a newer version, etc.). If an instructor has only one class, it may be possible to click into each Discussion Topic and see who has posted work. But this quickly becomes busy-work with multiple sections.

An alternative is to use the Assignment drop-box, which in most LMS platforms more clearly shows late submissions (or no submissions). The drawback is losing the small group approach, because students will be able to see ALL submissions. Again depending on instructor sensibility and philosophy, using the Assignment drop-box can be a much more efficient approach. Groups of students need to be documented along with the assignment specifications, so that students can search all submissions for the students they are assigned to review.
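Spotting late or missing drafts in a drop-box export can also be scripted rather than clicked through. The sketch below assumes a simple CSV export of submissions; the column names and date format are hypothetical, not those of any particular LMS.

```python
import csv
from datetime import datetime

# Draft deadline from the sample timeline: Tuesday 9/22 at 11:59pm.
DRAFT_DEADLINE = datetime(2009, 9, 22, 23, 59)

def flag_late_drafts(csv_path, deadline=DRAFT_DEADLINE):
    """Report students whose draft is missing or was submitted late.

    Expects a CSV with (hypothetical) columns: student, submitted_at,
    where submitted_at is 'YYYY-MM-DD HH:MM' or blank for no submission.
    """
    late = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            stamp = (row.get("submitted_at") or "").strip()
            if not stamp or datetime.strptime(stamp, "%Y-%m-%d %H:%M") > deadline:
                late.append(row["student"])
    return late
```

Students flagged this way would, under the policy above, still owe two reviews even though they receive none.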

If the Assignment feature is used, it may be possible to allow students to type or upload comments, depending on the LMS. Or a separate Discussion Topic can be set up for responses. The same guiding principle – grouping students into sub-groups and requiring they review submissions made chronologically just before theirs – can elegantly handle the inevitable non-compliance with instructions.


A similar problem awaits the teacher trying to manage and review the comments students write for one another. One solution is simply to wait for the submission of final drafts, and direct students to include the peer reviews they received. Again, inevitably, some students will forget to attach their peer reviews, defeating the instructor who wants to evaluate the quality of peer reviews.

A much less desirable solution is for the teacher to click through potentially hundreds of pages in the LMS reading posts or comments.
Workarounds can be used: students might need to post their reviews to a separate Discussion Topic, so the instructor can more easily review compliance and quality.


Increasingly, instructors are collecting work electronically – for environmental reasons, for convenience (no lugging piles of papers around), and to enable electronic grading. Whatever the process for collecting work, we suggest students be required to submit:

  1. A cover letter, addressed to the instructor, that:
    1. details their goals for the assignment
    2. discusses their progress from first draft to final
    3. reflects on feedback they have received earlier in their academic careers
    4. responds to the comments they received from the peer review process
  2. The final draft, complete with a works cited page if required
  3. The first draft
  4. Peer review(s)

The cover letter requirement helps students learn different rhetorical strategies (switching from a formal academic voice to a more business-like letter-writing voice), and can be a terrific leading indicator for instructors when it comes time to grade. Hastily written cover letters are often indicative of hastily written drafts. Good cover letters can help students maximize credit for the assignment because the instructor may better understand the subtleties of the work.


All suggestions to this point have assumed a generic Learning Management System with the typical features of Assignment Drop-boxes and Discussion Topics.
Some LMSs have simple evaluation tools, like ‘grade forms’ in Blackboard Vista/CE6, the Outcomes module in Moodle 1.9+, and the peer review feature in Blackboard 8.0+. These features have varying degrees of utility, and none of them are grounded in the latest composition research – or even the common-sense recognition that there is more to evaluating complex work than simple Likert scales.

Waypoint Outcomes, developed by Subjective Metrics and used by over 40 institutions in the US, Canada, and Europe, has focused on peer review from its earliest days. Whatever the approach taken by an instructor, Waypoint can structure students’ responses, help the instructor coach rather than click, and vastly improve the peer review experience for all parties.

Waypoint Outcomes is a tool for building, sharing, and using sophisticated rubrics to create exceptional feedback. Waypoint is tightly integrated with Blackboard, Blackboard Vista/CE6, and Moodle.

Using Waypoint, an instructor develops a set of criteria, creates a peer review Project to grant students access to the rubric, then manages the peer review process. Students access Waypoint via the LMS, click their way through a detailed rubric, annotating and explaining their choices as they go, then save their work and email the evaluation to the author of the work.

For more on Waypoint, visit


Sommers, Nancy. “Responding to Student Writing.” College Composition and Communication 33.2 (May 1982): 148–156.

Conan, Neal. “Procrastination Nation.” NPR: Talk of the Nation, 15 Dec. 2005. Accessed 13 Jun. 2008.