A rubric for assessment, usually in the form of a matrix or grid, is a tool used to interpret and mark students' work against criteria and standards. Rubrics are sometimes called "criteria sheets", "grading schemes", or "scoring guides". Rubrics can be designed for any content domain.
A rubric makes explicit a range of assessment criteria and expected performance standards. Assessors evaluate a student's performance against all of these, rather than assigning a single subjective score. A rubric:
- makes students aware of all expectations related to the assessment task, and helps them evaluate their own work as it progresses
- helps teachers apply consistent standards when assessing qualitative tasks, and promotes consistency in shared marking.
Assessment rubrics can be used for assessing learning at all levels, from discrete assignments within a course through to program-level capstone projects, larger research or design projects, and learning portfolios. You can use rubrics to structure discussions with students about different levels of performance on an assessment task, and students can employ the rubric during peer assessment and self-assessment.
Benefits
Assessment rubrics:
- provide a framework that clarifies assessment requirements and standards of performance for different marks. This supports assessment as learning: students can see what is important and where to focus their learning efforts.
- enable clear and consistent communication with students about assessment requirements and about how different levels of performance earn different marks. They allow assessors to give specific feedback to students on their performance.
- when used for self-assessment and peer assessment, make students aware of assessment processes and procedures, enhance their meta-cognitive awareness and improve their capacity to assess their own work.
- can result in richer feedback to students, giving them a clearer idea of where they sit in terms of an ordered progression towards increased expertise in a learning domain.
- help staff teams develop a shared language for talking about learning and assessment by engaging them in rubric-based conversations about quality.
- help assessors efficiently and reliably interpret and grade students' work.
- systematically illuminate gaps and weaknesses in students' understanding against particular criteria, helping teachers target areas to address.
Challenges
Using assessment rubrics can present a number of challenges:
- When learning outcomes relate to higher levels of cognition (for example, evaluating or creating), assessment designers can find it difficult to specify criteria and standards with exactitude. This can be a particular issue in disciplines or activities requiring creativity or other hard-to-measure capabilities.
- It can be challenging for designers to encompass different dimensions of learning outcomes (cognitive, psychomotor, affective) within specific criteria and standards. Performance in the affective domain in particular can be difficult to distinguish according to strict criteria and standards.
- Assessment rubrics are inherently indeterminate (Sadler, 2009), particularly when it comes to translating judgements on each criterion of an analytic rubric into marks.
- Breaking down the assessment into complicated, detailed criteria may increase the marking workload for staff, and may lead to:
  - distorted grading decisions (Sadler, 2009), or
  - students becoming over-dependent on the rubric and less inclined to develop their own evaluative judgement. Engaging students in creating, or contributing to the creation of, assessment rubrics can counter this.
Strategies
Design a rubric
An assessment rubric can be analytic or holistic.
- Analytic rubrics have several dimensions, with performance indicators for levels of achievement in each dimension.
- Holistic rubrics assess the task as a whole according to one scale, and are appropriate for less structured tasks, such as open-ended problems and creative products.
Assessment rubrics are composed of three elements (see the sketch after this list):
- a set of criteria that provides an interpretation of the stated objectives (performance, behaviour, quality)
- a range of different levels of performance between highest and lowest
- descriptors that specify the performance corresponding to each level, to allow assessors to interpret which level has been met.
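These three elements map naturally onto a simple data structure. The sketch below is a minimal illustration in Python; the class and field names, and the example level labels, are our own assumptions rather than terms from any particular rubric tool.

```python
from dataclasses import dataclass

@dataclass
class Level:
    """One performance standard within a criterion."""
    name: str        # e.g. "developing", "advanced" (illustrative labels)
    descriptor: str  # specifies the performance corresponding to this level

@dataclass
class Criterion:
    """One dimension of an analytic rubric; levels ordered lowest to highest."""
    name: str
    levels: list[Level]

@dataclass
class Rubric:
    """A set of criteria interpreting the stated objectives."""
    criteria: list[Criterion]

# Example: a two-level fragment of a communication criterion
communication = Criterion("Communication", [
    Level("developing", "One or more of aims, methods, results and conclusions are not clear."),
    Level("advanced", "Aims, methods, results and conclusions are all clear."),
])
rubric = Rubric([communication])
```

A holistic rubric is the degenerate case of this structure: a single criterion whose levels describe the task as a whole.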
One useful design strategy is to take a generic assessment rubric that matches the assessment task's objectives, discipline, level and other contextual factors, and adapt it for your own use: rewrite the attribute descriptions to reflect the course context, aims and learning outcomes, and tailor them to the specific assessment task.
Decide how the judgements at each level of attainment will flow through into the overall grading process and how rubric levels correspond to grades. Does the attainment of "advanced" skill or knowledge mean that a distinction or high distinction will be awarded? Does "developing" mean resubmission or fail?
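Continuing the illustrative sketch above, this mapping can be made explicit up front and published with the rubric. The level names and grade bands below are assumptions for illustration, not a statement of any institution's policy.

```python
# Assumed level-to-grade mapping -- decide this per course and publish it
# with the rubric so students know what each level means for their grade.
LEVEL_TO_GRADE = {
    "advanced":   "High Distinction / Distinction",
    "proficient": "Credit",
    "competent":  "Pass",
    "developing": "Fail or resubmit",
}

def grade_for(level_name: str) -> str:
    """Translate a judgement at a rubric level into a grade band."""
    return LEVEL_TO_GRADE[level_name]

print(grade_for("developing"))  # -> "Fail or resubmit"
```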
Assess with rubrics
- Ensure that assessment rubrics are prepared and available for students well before they begin work on tasks, so that the rubric contributes to their learning as they complete the work.
- Discuss assessment rubrics with students in class time. Use these discussions to refine and improve rubrics in response to students' common misunderstandings and misconceptions.
- Practise using rubrics in class. Have students assess their own and their peers' work.
- Frame your assessment feedback to students in the terms laid out in the rubric, so that they can clearly see where they have succeeded or performed less well in the task.
Ensure fairness
Provide the assessment rubric for a task to students early, to increase its value as a learning tool. For example, you might distribute it as part of the task briefing and guidelines presentation. This helps students understand the task, and allows them to raise any concerns or questions about the task and how it will be assessed.
Write rubrics in plain English, and phrase them so that they are as unambiguous as possible.
Use technology
- Learning-management systems (e.g. Moodle) often allow the use of rubrics in assessment, including peer and self-assessment. In Moodle, you can create a rubric and use it to grade online activities such as assignments, discussions, blogs and wikis.
- GradeMark (part of the Turnitin suite of tools) provides a rubric function for online marking.
- Dedicated group peer assessment tools such as iPeer and WebPA also have a rubric function.
- A free online tool, iRubric, allows you to create, adapt and share rubrics online.
Case study: Video about iUNSW Rubrik Application
This video shows how the Mechanical Engineering project (ENGG1000, T1 2011) used the iUNSW Rubrik iPad marking app to mark the final project competition, making the process much quicker and more efficient.
Related videos:
- Professor Bev Oliver on Standards Based Assessment - Keynote (short version)
- iUNSW Rubrik iOS App used in ENGG1000 T1 2011
UNSW Rubrics in Action - Chemical Engineering
Assessing a final year thesis
As part of his final year undergraduate course in Chemical Engineering, Dr Graeme Bushell has designed and tested the rubric described below over several semesters.
Students in Chemical Engineering, Industrial Chemistry and Food Science programs at UNSW are required to deliver a poster at the end of their final year thesis, explaining their research results. The assessment task aligns with:
- the UNSW graduate capability of producing "scholars who are capable of effective communication", and
- the Engineers Australia stage 1 competency "effective oral and written communication in the professional and lay domains".
The posters are presented over one morning in the final week of semester, with school academic staff and postdoctoral fellows browsing the work. The session runs along the lines of a conference poster session, with students explaining their projects to small groups and individuals throughout the session and answering questions as appropriate. Academics are each assigned a set of posters to mark against the specified criteria, using marking sheets that contain the rubric; the sheets are collected at the end of the poster session. Each student receives at least four assessments. The marking sheets are then collated by the course convenor and a final mark allocated.
A change in assessment scheme
The assessment scheme used for the posters was changed from first semester 2012, as the new convenor of the final year thesis courses (Bushell) felt that the old scheme used too many assessment criteria, and that implementing a standards-based approach would both improve practice and align more closely with what was then UNSW recommended practice and is now policy.
The rubric
The rubric lists the criteria and includes a range of performance standards between lowest and highest, with descriptors that specify each level of performance.
The rubric is first presented to students in the Course Outline and they are encouraged to discuss it with their supervisor. The marking scheme for the rubric is also presented in the Course Outline. The poster assessment is worth 15% of the total marks for the course.
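The case study does not spell out the arithmetic of turning several marking sheets into a course mark. The sketch below assumes the convenor simply averages the marker totals (each student has at least four) and then applies the 15% weighting stated in the Course Outline; the actual collation rule is the convenor's choice.

```python
from statistics import mean

def poster_course_mark(marker_totals: list[float], weighting: float = 0.15) -> float:
    """Collate marker totals (each out of 100) into the poster's contribution
    to the course mark. Averaging is an assumed collation rule."""
    if len(marker_totals) < 4:
        raise ValueError("each student should receive at least four assessments")
    return mean(marker_totals) * weighting

# e.g. four markers scoring 70, 75, 80 and 65 out of 100:
print(poster_course_mark([70, 75, 80, 65]))  # -> 10.875 of the 15 marks available
```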
The marking sheet/table
Dr Bushell uses a simple list layout for this rubric, as it allows more flexibility than a tabular format in the distribution of performance bands within a criterion, the number of performance bands used for each criterion, and the weighting of different criteria.
His marking sheet is shown below, followed by an example of how the same criteria might look in a tabular format.
Poster Assessment Marking Sheet
(Original format of rubric, as used by Graeme Bushell, School of Chemical Engineering)
Student name:___________________________
Marker name:____________________________
Date:___________________________________
Context
Put a tick next to the description which best describes how well the student explained why the work was done.
□ The student cannot explain why the research was done.
□ The student attempts to explain why the work was done but you don't think they really understand.
□ The student is able to explain why the work was done in direct terms.
□ The student is able to explain the broader context that the work fits into – why it was done and how important it is.
Content
Put a tick next to the description which best describes the quality of the work that was done.
□ The work appears to be incomplete – it fails to address the stated aims.
□ The work contains serious errors – the conclusions are cast into serious doubt.
□ The work contains some minor errors of design or execution that are unlikely to undermine the main conclusions.
□ The work appears to have been completed without errors.
Communication
Put a tick next to the description which best describes how well the student presented the work.
□ Taken together, graphical and verbal communication are so poor that you are left unsure what the project is about.
□ Multiple deficiencies: more than one of aims, methods, results and conclusions are not clear.
□ One of the following is not clear: aims, methods, results, conclusions.
□ Aims, methods, results and conclusions are clear but only after probing. Some aspects of the poster or presentation were poorly considered.
□ Aims, methods, results, conclusions are all clear. The poster is adequate.
□ Aims, methods, results, conclusions are all clear. The poster is attractive.
□ Aims, methods, results, conclusions are all clear. The poster is attractive and the presentation engaging.
Q&A
Put a tick next to the description which best describes how well the student answered questions.
□ The student is effectively unable to answer questions about the project.
□ The student attempts to answer questions about the project but clearly doesn't really understand.
□ The student is able to answer questions about the project – you are fairly sure they understand what they're doing.
□ The student listens carefully and answers questions easily and directly – they are clearly across the project.
Poster Assessment Rubric
(Marking sheet criteria presented in grid format. Performance standards run from Level 4, the highest, to Level 1, the lowest; each performance-standard cell can be allocated a mark or grade band, as determined in the specific context.)
Student name:___________________________
Marker name:____________________________
Date:___________________________________
| Criteria | Level 4 (highest) | Level 3 | Level 2 | Level 1 (lowest) | Additional comments |
|---|---|---|---|---|---|
| Context | The student is able to explain the broader context that the work fits into – why it was done and how important it is. | The student is able to explain why the work was done in direct terms. | The student attempts to explain why the work was done but you don't think they really understand. | The student cannot explain why the research was done. | |
| Content | The work appears to have been completed without errors. | The work contains some minor errors of design or execution that are unlikely to undermine the main conclusions. | The work contains serious errors – the conclusions are cast into serious doubt. | The work appears to be incomplete – it fails to address the stated aims. | |
| Communication | Aims, methods, results and conclusions are all clear. The poster is adequate, attractive, or attractive with an engaging presentation (three separate bands on the marking sheet). | Aims, methods, results and conclusions are clear but only after probing. Some aspects of the poster or presentation were poorly considered. | Multiple deficiencies: more than one of aims, methods, results and conclusions are not clear. | Taken together, graphical and verbal communication are so poor that you are left unsure what the project is about. | |
| Q&A | The student listens carefully and answers questions easily and directly – they are clearly across the project. | The student is able to answer questions about the project – you are fairly sure they understand what they're doing. | The student attempts to answer questions about the project but clearly doesn't really understand. | The student is effectively unable to answer questions about the project. | |
Resources
- iRubric: online tool for creating and sharing rubrics.
- Rubric Best Practices, Examples, and Templates
Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360. http://dx.doi.org/10.1080/02602938.2015.1111294
Northern Illinois University Center for Innovative Teaching and Learning (2012). Rubrics for assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide
Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. S. Sanger & N. W. Gleason (Eds.), Diversity and inclusion in global higher education: Lessons from across Asia (pp. 73–95). Palgrave Macmillan.
Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179.