How to Use Rubrics

A rubric is a document that describes the criteria by which students’ assignments are graded. Rubrics can be helpful for:

  • Making grading faster and more consistent (reducing potential bias). 
  • Communicating your expectations for an assignment to students before they begin. 

Moreover, for assignments whose criteria are more subjective, the process of creating a rubric and articulating what success on the assignment looks like provides an opportunity to check for alignment with the intended learning outcomes and to modify the assignment prompt as needed.

Why rubrics?

Rubrics are best for assignments or projects that require evaluation on multiple dimensions. Creating a rubric makes the instructor’s standards explicit to both students and other teaching staff for the class, showing students how to meet expectations.

Additionally, a more comprehensive rubric streamlines grading: students receive informative feedback about their performance from the rubric itself, even if they don’t get as many individualized comments, and grading can be more standardized and efficient across graders.

Finally, rubrics allow for reflection, since the instructor has to think through their standards and intended outcomes for students. Rubrics can also support self-directed learning, especially if students use them to review their own or their peers’ work, or if students are involved in creating the rubric.

How to design a rubric

1. Consider the desired learning outcomes

What learning outcomes is this assignment reinforcing and assessing? If the learning outcome seems “fuzzy,” iterate on the outcome by thinking about the expected student work product. This may help you more clearly articulate the learning outcome in a way that is measurable.  

2. Define criteria

What does a successful assignment submission look like? As described by Allen and Tanner (2006), it can be helpful to develop an initial list of categories in which students should demonstrate proficiency by completing the assignment. These categories should align with the intended learning outcomes you identified in Step 1, although they may be more granular in some cases. For example, if the task assesses students’ ability to formulate an effective communication strategy, what components of their communication strategy will you be looking for? Talking with colleagues or looking at existing rubrics for similar tasks may give you ideas for categories to consider for evaluation.

If you have assigned this task before and have samples of student work, it can also be helpful to create a qualitative observation guide. This approach is described in Linda Suskie’s book Assessing Student Learning, where she suggests thinking about what made you decide to give one assignment an A and another a C, as well as taking notes while grading and looking for common patterns. The themes you find yourself commenting on repeatedly may reveal your goals and expectations for students. An example of an observation guide used to take notes on predetermined areas of an assignment is shown here.

In summary, consider the following list of questions when defining criteria for a rubric (O’Reilly and Cyr, 2006):

  • What do you want students to learn from the task?
  • How will students demonstrate that they have learned?
  • What knowledge, skills, and behaviors are required for the task?
  • What steps are required for the task?
  • What are the characteristics of the final product?

After developing an initial list of criteria, prioritize the most important skills you want to target, and eliminate unessential criteria or combine similar skills into one group. Most rubrics have between 3 and 8 criteria. Rubrics that are too lengthy make grading difficult and make it challenging for students to identify the key skills they need to demonstrate for the given assignment.

3. Create the rating scale

According to Suskie, you will want at least 3 performance levels: one for adequate and one for inadequate performance at a minimum, plus an exemplary level to motivate students to strive for even better work. Rubrics often contain 5 levels, adding a level between adequate and exemplary and another between adequate and inadequate. Usually no more than 5 levels are needed, as having too many rating levels makes it hard to distinguish consistently between adjacent ratings (such as a 6 versus a 7 out of 10). Suskie also suggests labeling each level with a name to clarify which level represents the minimum acceptable performance. Labels will vary by assignment and subject, but some examples are:

  • Exceeds standard, meets standard, approaching standard, below standard
  • Complete evidence, partial evidence, minimal evidence, no evidence

4. Fill in descriptors

Fill in descriptors for each criterion at each performance level, expanding on the list of criteria you developed in Step 2. Begin to write full descriptions, thinking about what exemplary work that students can strive towards would look like. Avoid vague terms like “good”; use explicit, concrete language to describe what meeting a criterion looks like. For instance, a criterion called “organization and structure” is more descriptive than “writing quality.” Describe measurable behavior and use parallel language for clarity: the wording for each criterion should be very similar across levels, differing only in the degree to which standards are met. For example, in a sample rubric from Chapter 9 of Suskie’s book, the criterion of “persuasiveness” has the following descriptors:

  • Well Done (5): Motivating questions and advance organizers convey the main idea. Information is accurate.
  • Satisfactory (3-4): Includes persuasive information.
  • Needs Improvement (1-2): Includes persuasive information with few facts.
  • Incomplete (0): Information is incomplete, out of date, or incorrect.

These sample descriptors share a similar sentence structure, which provides consistent language across performance levels and makes clear the degree to which each standard is met.
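
To make this structure concrete, here is a minimal sketch in Python of how a descriptive rubric’s criteria, performance levels, and descriptors might be represented and scored. The persuasiveness descriptors come from the Suskie sample above; the “Organization and structure” descriptors and the single point values assigned to the middle levels are hypothetical illustrations, not part of any published rubric.

    # A minimal sketch of a descriptive rubric as a data structure: each
    # criterion maps performance-level labels to a descriptor, and each label
    # carries a point value used when scoring. The "Persuasiveness" descriptors
    # follow the Suskie sample above; "Organization and structure" and the
    # middle-level point values are hypothetical.
    LEVEL_POINTS = {
        "Well Done": 5,
        "Satisfactory": 4,        # hypothetical single value for the 3-4 band
        "Needs Improvement": 2,   # hypothetical single value for the 1-2 band
        "Incomplete": 0,
    }

    RUBRIC = {
        "Persuasiveness": {
            "Well Done": "Motivating questions and advance organizers convey "
                         "the main idea. Information is accurate.",
            "Satisfactory": "Includes persuasive information.",
            "Needs Improvement": "Includes persuasive information with few facts.",
            "Incomplete": "Information is incomplete, out of date, or incorrect.",
        },
        "Organization and structure": {
            "Well Done": "Ideas follow a clear, logical sequence with smooth transitions.",
            "Satisfactory": "Ideas are mostly in a logical order; some transitions are abrupt.",
            "Needs Improvement": "Ordering of ideas is difficult to follow.",
            "Incomplete": "No discernible organization.",
        },
    }

    def score(selected_levels):
        """Total the points for the level selected on each criterion."""
        return sum(LEVEL_POINTS[level] for level in selected_levels.values())

    # Example: one submission rated on both criteria.
    print(score({"Persuasiveness": "Well Done",
                 "Organization and structure": "Satisfactory"}))  # -> 9

One convenience of laying the rubric out this way is that the same criteria and descriptors you grade with can be shared verbatim with students.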

5. Test your rubric

Test your rubric against a range of student work to see whether it is realistic. You may also consider leaving room for aspects of the assignment, such as effort, originality, and creativity, to encourage students to go beyond the rubric. If multiple instructors will be grading, it is important to calibrate scoring by having all graders use the rubric on the same selected set of student work and then discuss any differences in their scores. This process builds consistency in grading and makes the grading more valid and reliable.
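
As an illustration of this calibration step, the following sketch compares two graders’ rubric scores on the same sample submissions and flags criteria whose scores diverge enough to discuss. All sample names, scores, and the discussion threshold are hypothetical.

    # A sketch of a calibration check: two graders score the same sample
    # submissions with the rubric, and criteria where their scores diverge by
    # more than a chosen threshold are flagged for discussion.
    grader_a = {
        "sample_1": {"Persuasiveness": 5, "Organization and structure": 4},
        "sample_2": {"Persuasiveness": 2, "Organization and structure": 3},
    }
    grader_b = {
        "sample_1": {"Persuasiveness": 4, "Organization and structure": 4},
        "sample_2": {"Persuasiveness": 5, "Organization and structure": 2},
    }

    THRESHOLD = 2  # point difference large enough to warrant discussion

    for sample, scores_a in grader_a.items():
        for criterion, score_a in scores_a.items():
            diff = abs(score_a - grader_b[sample][criterion])
            if diff >= THRESHOLD:
                print(f"Discuss {sample} / {criterion}: scores differ by {diff} points")

Flagged items become the agenda for the calibration discussion, which is where graders reconcile how they interpret each descriptor.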

Types of Rubrics

If you would like to dive deeper into rubric terminology, this section discusses some of the different types of rubrics. Regardless of the type of rubric you use, however, it is most important to focus first on your learning goals and to think about how the rubric will clarify expectations for students and measure their progress towards those goals.

Depending on the nature of the assignment, rubrics can come in several varieties (Suskie, 2009):

Checklist Rubric

This is the simplest kind of rubric: it lists specific features or aspects of the assignment that may be present or absent, without a rating scale or descriptors. See example from 18.821 project-based math class.

Rating Scale Rubric

This is like a checklist rubric, but instead of merely noting the presence or absence of a feature or aspect of the assignment, the grader also rates quality (often on a graded or Likert-style scale). See example from 6.811 assistive technology class.

Descriptive Rubric

A descriptive rubric is like a rating scale rubric, but it includes descriptions of what performance at each level on each scale looks like. Descriptive rubrics are particularly useful for communicating instructors’ expectations to students and for creating consistency when multiple graders score an assignment. This kind of rubric is probably what most people picture when they imagine a rubric. See example from 15.279 communications class.

Holistic Scoring Guide

Unlike the first 3 types of rubrics, a holistic scoring guide describes performance at each level (e.g., A-level performance, B-level performance) as a whole, without breaking the assignment down into separate scales. This kind of rubric is particularly useful when there are many assignments to grade and a moderate to high degree of subjectivity in assessing quality. Because it can be difficult to maintain consistency across scores, holistic scoring guides are most helpful for making decisions quickly rather than for providing detailed feedback to students. See example from 11.229 advanced writing seminar.

The kind of rubric that is most appropriate will depend on the assignment in question.

Implementation tips

Rubrics can also be attached to Canvas assignments. See this resource from Boston College for more details, along with the guides from Canvas Instructure.
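
If you maintain many rubrics, they can also be managed programmatically. The sketch below is only an illustration: it assumes the Canvas REST Rubrics API endpoint for listing a course’s rubrics and uses placeholder values for the Canvas URL, access token, and course ID. Consult the Canvas Instructure API documentation for your institution’s setup before relying on it.

    # A minimal, unofficial sketch for listing the rubrics in a Canvas course
    # via the Canvas REST API. The endpoint path follows the Canvas Rubrics API
    # documentation; the base URL, access token, and course ID are placeholders.
    import requests

    CANVAS_BASE_URL = "https://canvas.example.edu"  # placeholder institution URL
    ACCESS_TOKEN = "YOUR_CANVAS_API_TOKEN"          # placeholder token
    COURSE_ID = 12345                               # placeholder course ID

    response = requests.get(
        f"{CANVAS_BASE_URL}/api/v1/courses/{COURSE_ID}/rubrics",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

    for rubric in response.json():
        print(rubric.get("id"), rubric.get("title"))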

References

Allen, D., & Tanner, K. (2006). Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners. CBE—Life Sciences Education, 5(3), 197-203. doi:10.1187/cbe.06-06-0168

Cherie Miot Abbanat. 11.229 Advanced Writing Seminar. Spring 2004. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

Haynes Miller, Nat Stapleton, Saul Glasman, and Susan Ruff. 18.821 Project Laboratory in Mathematics. Spring 2013. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

Lori Breslow, and Terence Heagney. 15.279 Management Communication for Undergraduates. Fall 2012. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

O’Reilly, L., & Cyr, T. (2006). Creating a Rubric: An Online Tutorial for Faculty. Retrieved from https://www.ucdenver.edu/faculty_staff/faculty/center-for-faculty-development/Documents/Tutorials/Rubrics/index.htm

Suskie, L. (2009). Using a scoring guide or rubric to plan and evaluate an assessment. In Assessing student learning: A common sense guide (2nd edition, pp. 137-154). Jossey-Bass.

William Li, Grace Teo, and Robert Miller. 6.811 Principles and Practice of Assistive Technology. Fall 2014. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.