Assessing by Multiple Choice Questions

Multiple choice question (MCQ) tests can be useful for formative assessment and to stimulate students' active and self-managed learning. They improve students' learning performance and their perceptions of the quality of their learning experience (Velan et al., 2008). When using MCQ tests for formative learning, you might still want to assign a small weight to them in the overall assessment plan for the course, to indicate to students that their grasp of the material tested is important.

MCQ tests are strongly associated with assessing lower-order cognition such as the recall of discrete facts. Because of this, assessors have questioned their use in higher education. It is possible to design MCQ tests to assess higher-order cognition (such as synthesis, creative thinking and problem-solving), but questions must be drafted with considerable skill if such tests are to be valid and reliable. This takes time and entails significant subjective judgement.

When determining whether an MCQ test is at the appropriate cognitive level, you may want to compare it with the levels set out in Krathwohl's (2002) revision of Bloom's Taxonomy of Educational Objectives. If an MCQ test is, in fact, appropriate to the learning outcomes you want to assess and the level of cognition they involve, the next question is whether it would be best used for formative assessment (to support students' self-management of their learning) or for summative assessment (to determine the extent of students' learning at a particular point). When MCQ tests are being used for summative assessment, it's important to ask whether this is for the ease of marking them or for solid educational reasons. Where MCQ tests are appropriate, ensure that you integrate them effectively into the assessment design. MCQ tests should never constitute the only or major form of summative assessment in university-level courses.

When to use

MCQ tests are useful for assessing lower-order cognitive processes, such as the recall of factual information, although this is often at the expense of higher-level critical and creative reasoning processes. Use MCQ tests when, for example, you want to:

  • assess lower-order cognition such as the recall of discrete facts, particularly if they will be essential to higher-order learning later in the course
  • gather information about students' pre-course understanding, knowledge gaps and misconceptions, to help plan learning and teaching approaches
  • provide students with an accessible way to review course material, check that they understand key concepts and obtain timely feedback to help them manage their own learning
  • test students' broad knowledge of the curriculum and learning outcomes.

It should be noted that the closed-ended nature of MCQ tests makes them particularly inappropriate for assessing originality and creativity in thinking.

Benefits

Automation

MCQ tests are readily automated, with numerous systems available to compile, administer, mark and provide feedback on tests according to a wide range of parameters. Marking can be done by human markers with little increase in marking load, or automatically, with no additional demands on teachers or tutors.

Objectivity

Although MCQ tests can lack nuance in that the answers are either right or wrong (provided the questions are well designed), this has the advantage of reducing markers' bias and increasing overall objectivity. Moreover, because the tests do not require students to formulate and write their own answers, the students' writing ability (which can vary widely even within a homogeneous cohort) becomes much less of a subjective factor in determining their grasp of the material. It should be noted, however, that MCQ tests are highly prone to cultural bias (Bazemore-James et al., 2016).

The development of question banks

MCQ tests can be refined and enhanced over time to incorporate new questions into an ever-growing question pool that can be used in different settings.

Challenges

While MCQ tests are quick and straightforward to administer and mark, they require a large amount of up-front design effort to ensure that they are valid, relevant, fair and as free as possible from cultural, gender or racial bias. It's also challenging to write MCQs that resist mere guesswork. This is particularly the case if the questions are intended to test higher-order cognition such as synthesis, creative thinking and problem-solving.

The use of MCQ tests as summative assessments can encourage students to adopt superficial approaches to learning. This superficiality is exacerbated by the lack of in-depth, critical feedback inherent in a highly standardised assessment.

MCQ tests can disadvantage students with reading or English-language difficulties, regardless of how well they understand the content being assessed.

Strategies

Plan a summative MCQ test

If you decide that an MCQ test is appropriate for summative assessment according to the objectives and outcomes for a course, let students know in the course outline that you'll be using it.

Table 1 sets out a number of factors to consider in the test planning stage.

Table 1: Factors in planning MCQ test design (each planning question below is followed by the factors to consider)

When should it be conducted?

  • Timing: early, mid-, late or post course?
  • Frequency: regularly, weekly, occasionally, once?
  • If in exam week, avoid timetable clashes by formally scheduling the test as an exam.

Where should it be conducted?

  • In a specified physical location, or online?
  • If online, how will you manage different time zones?
  • Are online software tools available to facilitate the construction, conduct and scoring of the MCQ test, and to provide results and feedback?

Are the costs justified?

  • How much time will be needed for development and what will that cost?
  • If the test is on paper, what is the cost of printing it?
  • What are the costs of exam venues and invigilation?
  • What are the costs of marking, if this is not automated?
  • Is there a more cost-effective method – for example, the use of existing validated questions and question banks or pools?

How will you manage security?

  • How will you ensure the security of exam questions before the exam?
  • Will exams undertaken in computing laboratories be invigilated?
  • How will you securely manage results information?
  • How will you provide results to students cost-effectively, but without jeopardising their privacy?

How will you manage risks?

  • Which specialist staff or skills do you need to consult (Learning & Teaching staff, IT support staff, examinations unit staff, disability services)?
  • What is the contingency plan? For example, what will happen if system outages occur, or computers malfunction in laboratories?
  • How will you manage test completion by students in different time zones?
  • Have you observed copyright provisions when using existing MCQ tests and questions?

How will you score the test?

  • Should all MCQ items in a test be equally weighted?
  • Will students score zero for uncompleted or wrong answers? (Using negative marks to reduce the incidence of guessing is not recommended.)
  • Should the mark required for a pass be 50% or should it be raised – even to 100% – where you are testing essential factual knowledge?
  • Whose involvement in scoring needs to be secured – for example, teaching teams, casual demonstrators, students – or will scoring be automated?

How will you provide feedback?

  • Will feedback be timely enough to allow for learning improvement within the course? For example, can you give immediate feedback by setting up an answer-contingent progression through the test? Or can you guarantee feedback within a specified short period?
  • How can you go beyond simple notification of a student's score to focus feedback on positive learning improvement?
  • How will students be able to use generic feedback about class-wide performances on the MCQ test to interpret their individual results?
  • Who will provide feedback – teachers, or students through peer assessment or self-assessment?

How will you assure and improve quality?

  • How will you manage MCQ test scoring processes? For example, will you train scorers and provide scorer guidelines?
  • Will your team of assessors develop questions and question banks collaboratively, so as to enhance your test's validity?
  • Can you establish peer-review processes to check MCQ tests and questions for alignment with course objectives, logical sequence, timing, item construction and so on?
  • How will you validate MCQ tests and questions, for example by analysing the discriminatory value of individual items?
  • How will the department review and endorse MCQ use and practices?
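The scoring questions in Table 1 (equal weighting, zero for blank or wrong answers, no negative marking, an adjustable pass mark) can be sketched as a simple marking routine. This is an illustrative sketch only; the function and field names are hypothetical and not drawn from any particular assessment platform.

```python
# Illustrative sketch of an MCQ scoring policy: equal weighting,
# zero for blank or wrong answers, no negative marking, and a
# configurable pass mark. All names here are hypothetical.

def score_mcq(responses, answer_key, pass_mark=0.5):
    """Return (score, total, passed) for one student's responses.

    responses:  dict mapping question id -> selected option (or absent).
    answer_key: dict mapping question id -> correct option.
    pass_mark:  fraction of items required to pass (e.g. 1.0 when
                testing essential factual knowledge).
    """
    total = len(answer_key)
    # Equal weighting: each correct item scores 1; blank or wrong scores 0.
    score = sum(
        1 for qid, correct in answer_key.items()
        if responses.get(qid) == correct
    )
    return score, total, score / total >= pass_mark


key = {"q1": "B", "q2": "D", "q3": "A"}
student = {"q1": "B", "q2": "D"}        # q3 left blank, so it scores 0
print(score_mcq(student, key))          # (2, 3, True) with a 50% pass mark
```

Raising `pass_mark` to 1.0 implements the "even to 100%" option mentioned above for tests of essential factual knowledge.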

Construct an MCQ test

Constructing effective MCQ tests and items takes considerable time and requires scrupulous care in the design, review and validation stages. Constructing MCQ tests for high-stakes summative assessment is a specialist task.

For this reason, rather than constructing a test from scratch, it may be more efficient to find out what validated tests already exist and adapt one to your course.

In some circumstances it may be worth the effort to create a new test. If you can undertake test development collaboratively within your department or discipline group, or as a larger project across institutional boundaries, you will increase the test's potential longevity and sustainability.

By progressively developing a multiple-choice question bank or pool, you can support benchmarking processes and establish assessment standards that have long-term effects on assuring course quality.

Use a design framework to plan how individual MCQ items will assess particular topic areas and types of learning objectives, across a spectrum of cognitive demand, and contribute to the test's overall balance. As an example, the "design blueprint" in Table 2 provides a structural framework for planning.

Table 2: Design blueprint for multiple-choice test design (from the Instructional Assessment Resources at the University of Texas at Austin)

Cognitive domain     Topic A   Topic B   Topic C   Topic D   Total items   % of total
(Bloom's Taxonomy)
Knowledge               1         2         1         1           5           12.5
Comprehension           2         1         2         2           7           17.5
Application             4         4         3         4          15           37.5
Analysis                3         2         3         2          10           25.0
Synthesis               –         1         –         1           2            5.0
Evaluation              –         –         1         –           1            2.5
TOTAL                  10        10        10        10          40          100
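A blueprint like this can be checked mechanically: each row's percentage is its item count over the test total, and in this example each topic column should sum to the same number of items. The sketch below simply re-enters the blueprint figures so that totals and percentages are derived rather than typed by hand; the variable names are illustrative only.

```python
# Re-entering the Table 2 blueprint to check its internal consistency:
# row percentages and column totals are derived, not typed by hand.

blueprint = {                 # items per cognitive level, across Topics A-D
    "Knowledge":     [1, 2, 1, 1],
    "Comprehension": [2, 1, 2, 2],
    "Application":   [4, 4, 3, 4],
    "Analysis":      [3, 2, 3, 2],
    "Synthesis":     [0, 1, 0, 1],
    "Evaluation":    [0, 0, 1, 0],
}

total_items = sum(sum(row) for row in blueprint.values())   # 40

for level, row in blueprint.items():
    pct = 100 * sum(row) / total_items
    print(f"{level:<13} {sum(row):>2} items  {pct:5.1f}%")

# Each topic column carries an equal share of the test in this blueprint.
topic_totals = [sum(col) for col in zip(*blueprint.values())]
print("Topic totals:", topic_totals)    # [10, 10, 10, 10]
```

The same check scales to any blueprint: if the column totals drift apart, the test's topic coverage has become unbalanced.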

Use the most appropriate format for each question posed. Ask yourself, is it best to use:

  • a single correct answer
  • more than one correct answer
  • a true/false choice (with single or multiple correct answers)
  • matching (e.g. a term with the appropriate definition, or a cause with the most likely effect)
  • sentence completion, or
  • questions relating to some given prompt material?

To assess higher-order thinking and reasoning, consider basing a cluster of MCQ items on some prompt material, such as:

  • a brief outline of a problem, case or scenario
  • a visual representation (picture, diagram or table) of the interrelationships among pieces of information or concepts
  • an excerpt from published material.

You can present the associated MCQ items in a sequence from basic understanding through to higher-order reasoning, including:

  • identifying the effect of changing a parameter
  • selecting the solution to a given problem
  • nominating the optimum application of a principle.

You may wish to add some short-answer questions to a substantially MCQ test to minimise the effect of guessing by requiring students to express in their own words their understanding and analysis of problems.

Well in advance of an MCQ test, explain to students:

  • the purposes of the test (and whether it is formative or summative)
  • the topics being covered
  • the structure of the test
  • whether aids can be taken into the test (for example, calculators, notes, textbooks, dictionaries)
  • how it will be marked
  • how the mark will contribute to their overall grade.

Compose clear instructions on the test itself, explaining:

  • the components of the test
  • their relative weighting
  • how much time you expect students to spend on each section, so that they can optimise their time.

Quality assurance of MCQ tests

Whether you use MCQ tests to support learning in a formative way or for summative assessment, ensure that the overall test and each of its individual items are well aligned with the course learning objectives. When using MCQ tests for summative assessment, it's all the more critical that you assure their validity.

The following strategies will help you assure quality:

  • Use a basic quality checklist (such as this one from Knapp & Associates International) when designing and reviewing the test.
  • Take the test yourself, and allow students roughly four times your own completion time.
  • Work collaboratively across your discipline to develop an MCQ item bank as a dynamic (and growing) repository that everyone can use for formative or summative assessments, and that enables peer review, evaluation and validation.

Use peer review to:

  • consider whether MCQ tests are educationally justified in your discipline
  • critically evaluate MCQ test and item design
  • examine the effects of using MCQs in the context of the learning setting
  • record and disseminate the peer review outcomes to students and colleagues.

Engage students in active learning with MCQ tests

Used formatively, MCQ tests can:

  • engage students in actively reviewing their own learning progress, identifying gaps and weaknesses in their understanding, and consolidating their learning through rehearsal
  • provide a trigger for collaborative learning activities, such as discussion and debate about the answers to questions
  • become collaborative through the use of technologies such as electronic voting systems
  • through peer assessment, help students identify common areas of misconception within the class.

You can also create activities that disrupt the traditional agency of assessment. You might, for example, require students to indicate the learning outcomes with which individual questions are aligned, or to construct their own MCQ questions and prepare explanatory feedback on the right and wrong answers (Fellenz, 2004).

Ensure fairness

Construct MCQ tests according to inclusive-design principles to ensure equal chances of success for all students. Take into account any diversity of ability, cultural background or learning styles and needs.

  • Avoid sexual, racial, cultural or linguistic stereotyping in individual MCQ test items, to ensure that no groups of students are unfairly advantaged or disadvantaged.
  • Provide alternative formats for MCQ-type exams for any students with disabilities. They may, for example, need more time to take a test, or to be provided with assistive technology or readers.
  • Set up contingency plans for timed online MCQ tests, as computers can malfunction or system outages occur at any time. Also address any issues arising from students being in different time zones.
  • Develop processes in advance so that you have time to inform students about the objectives, formats and delivery of MCQ tests and, for a summative test, the marking scheme being used and the test's effect on overall marks.
  • To reduce the opportunity for plagiarism, specify randomised question presentation for MCQ tests, so that different students are presented with the same content in a different order.
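Most quiz platforms offer randomised question order as a single setting; where you script it yourself, a per-student shuffle of the same question pool is enough. The sketch below is an assumption-laden illustration, not a prescribed method: seeding by student ID is a design choice that gives each student a stable but distinct order across sessions.

```python
import random

def question_order(question_ids, student_id):
    """Shuffle a shared question pool into a reproducible per-student order.

    Seeding by student ID (an illustrative choice, not a prescribed method)
    means one student sees the same order on every page load, while
    different students typically see different orders.
    """
    rng = random.Random(student_id)   # deterministic per-student seed
    order = list(question_ids)        # copy, so the shared pool is untouched
    rng.shuffle(order)
    return order

pool = ["q1", "q2", "q3", "q4", "q5"]
print(question_order(pool, "z1234567"))   # same student -> same order every time
print(question_order(pool, "z7654321"))   # another student -> typically different
```

Because every student answers the same items, scoring and item analysis are unaffected; only the presentation order changes.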

Resources

Writing Good Multiple Choice Test Questions (Vanderbilt University).

Bazemore-James, C. M., Shinaprayoon, T., & Martin, J. (2016). Understanding and supporting students who experience cultural bias in standardized tests. Trends and Issues in Academic Support: 2016-2017.

Fellenz, M.R. (2004). Using assessment to support higher level learning: the multiple choice item development assignment. Assessment and Evaluation in Higher Education 29(6), 703-719.

Krathwohl, D. R. (2002). A revision of Bloom's Taxonomy: An overview. Theory into Practice, 41(4), 212-218.

LeJeune, J. (2023). A multiple-choice study: The impact of transparent question design on student performance. Perspectives In Learning, 20(1), 75-89.

Velan, G.M., Jones, P., McNeil, H.P. and Kumar, R.K. (2008). Integrated online formative assessments in the biomedical sciences for medical students: benefits for learning. BMC Medical Education 8(52).

Page last updated: Monday 16 December 2024