Academic integrity in online assessments
The increasing need to deliver assessments online challenges academics and universities to ensure the integrity of those assessments. Various approaches can be taken to minimise cheating and plagiarism.
One strategy to reduce cheating is to improve student satisfaction. Students who are dissatisfied with their teaching and learning environments are more likely to engage in contract cheating (Bretag et al., Studies in Higher Education, 2019). Getting to know our students enables us to foster open and trusting relationships. It is also important to incentivise students to take greater responsibility for their own learning and to see the value of completing each exercise for their own self-development and learning.
Another strategy to minimise cheating is careful assessment design. For example, randomisation techniques may be useful in assessments that contain convergent tasks; coupled with appropriate assessment conditions, these may reduce the possibility of collusion. It is worth noting that no single approach is likely to stop students from cheating, and that assessments need continuous adjustment and refreshing.
In the case studies below, UNSW academics outline the tools, techniques and strategies they employed to improve deep learning and minimise cheating and misconduct in their online assessments.
If you have approaches or feedback to share, we'd love to hear from you! Contact us here.
Case studies
Open collaboration during exam
Dr David Kellerman of the School of Mechanical and Manufacturing Engineering throws the traditional rule book out the window when he assesses his students in their final exam in ENGG2400 Mechanics of Solids within an open online discussion environment in Microsoft Teams.
Mechanics of Solids is a second-year foundational engineering course with approximately 600 students. The nature of the curriculum is such that the topics are closely mandated, and the curriculum being assessed cannot be changed. Given the prescribed course outline, the questions cannot simply be redesigned as creative, open-book assignments, because the learning outcomes to be demonstrated are principally focused on mathematical problem solving using a large spectrum of methods. Furthermore, the exam cannot be turned into a two-week assignment, as this would give students plenty of time to collude and to contract-cheat. Whilst STACK is a powerful tool for assessing maths-based questions, it can ultimately be reverse-engineered by designing a spreadsheet containing the formulas required to solve the questions. Contract cheating is cheap and easy, and collusion occurs regularly via private chat groups. Take-home assessments are even easier to collude in because of their longer timeline. Additionally, proctoring is a dead end and makes for an unpleasant student experience. Society, including students, would like to operate based on trust — no one likes to be watched by Big Brother.
The solution to this conundrum needed to work for remote examination with no invigilation or online proctoring — keep the fundamental exam format but treat the students as trusted collaborative professionals. The solution also had to be less technological and more cultural. Namely, there was a need to flip the exam culture such that students saw academics as being on their side and there to help them achieve high marks, not battle against them.
In my course’s final online exam, students have access to a Microsoft Teams channel as they complete their exam paper. The only rule applied to this chat room is that posts must not be illegal — e.g. no copyright infringement, no hate speech, etc. Students are allowed to ask for help from each other, post specific questions regarding how to solve the problems, post photographs of their working out and indeed their answers. The exam layout is shown in Figure 1 below.
Figure 1: Nine questions, each a Microsoft Forms Webpart, are embedded in Heading 1 sections of a Teams SharePoint site. Left of these is a navigation jump list and countdown timer. This is pinned as a tab and can have the exam chat docked to the right.
In T2 2020, 770 chat comments were made during the final exam. It was clear that the best students, who genuinely wanted to learn — i.e. those not inclined to copy answers and use spreadsheet calculation templates in traditional assessments — were the first to use the chat room, gaining immediate access to the support of myself and my teaching assistants. These students would share information collegially whilst avoiding crossing perceived academic boundaries. The second-best students wanted to learn from the best students and used the chat in the same way. This effect cascaded down to all students and removed any advantage of being in a private chat. By facilitating this free and open discussion in an official course Microsoft Teams channel, private Facebook and WhatsApp chat rooms were made redundant, and the behaviours in the official chat were admirable.
My strategy for assessing students in their final exam depended on a scaffolded approach I used throughout the teaching term. The final exam was designed as follows:
- The exam booklet was distributed to students all over the world by post.
- The exam booklet contained the students’ worked solutions from the tutorials and block tests throughout the term.
- Students submitted scans of their tutorial work throughout the term, effectively providing a handwriting reference to compare with the final exam. This provided invisible biometric confirmation that the student was at least the person writing the solutions.
- These approximately 200 solutions were all completed by the students themselves, and in the chat other students would make comments like “Question 3 is similar to tutorial problem 3.24”, meaning that every student was referencing their own past work while also working collaboratively.
- The exam booklet also contained equations for reference.
- The exam booklet contained empty space for the final exam and final solution fields.
- The exam booklet was printed with a QR code and an identifier for question-branching purposes. Students scanned the QR code to access the exam questions online. The QR codes also carried a hash-encrypted reference to the identity of the student to whom the booklet was posted (a sketch of this idea follows Figure 2 below).
- Students wrote down their workings in the booklet and then uploaded a photograph of their workings. Final answers were submitted online in an embedded Microsoft Form on Teams.
- Final answers in Forms were locked in, yet an additional 30 minutes was granted to scan and upload written work. Because the final answer is locked in, there is very little opportunity to change mathematical working in the 30-minute scanning window.
- The exam was delivered via SharePoint, nested inside Teams as a tab.
- The exam had branching with slight variations in questions.
- A combination of Office Lens (or a scanning app of the students’ choosing) with Teams Assignments was used for submission of working out (1 PDF per question). See Figure 2.
Figure 2: A PDF made using the Office Lens app on a phone. One of nine questions uploaded to Teams Assignments.
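As flagged in the list above, a per-student identifier of this kind can be built with a keyed hash. The sketch below is a hypothetical illustration only, not the actual ENGG2400 implementation; the URL, secret and function names are invented for the example.

```javascript
// Hypothetical sketch: derive an opaque per-student token to embed in the
// QR code URL printed on each booklet. A keyed hash (HMAC) lets the exam
// server verify the token later without the identity being forgeable.
const crypto = require("crypto");

// In practice this key would live server-side; the fallback is for demo only.
const SECRET_KEY = process.env.EXAM_SECRET || "demo-only-secret";

function qrUrlForStudent(studentId, examId) {
  const token = crypto
    .createHmac("sha256", SECRET_KEY)
    .update(`${examId}:${studentId}`)
    .digest("hex")
    .slice(0, 16); // truncated to keep the QR code compact
  return `https://exam.example.edu/${examId}?sid=${studentId}&tok=${token}`;
}

console.log(qrUrlForStudent("z1234567", "ENGG2400-final"));
```

Because the token is derived from a server-side secret, a booklet photographed and shared by one student still identifies that student whenever the link is used.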
A near-perfect, slightly asymmetric bell-curve distribution was achieved in the final exam, with the average mark, pass rate and HD rate all typical for the course were it run face-to-face on campus. Additionally, my course achieved 100.0% satisfaction in the myExperience student survey. My students commented that this was a very forward-thinking approach to online examination.
Some technical challenges occurred because SharePoint was being used in a way for which it was not purpose-built. These challenges were typical of a new system and have since been resolved. Students were polled to determine which system was better: of 100 respondents, 70% said SharePoint was better, 20% were indifferent and 10% said Moodle was better. Given the perfect student satisfaction score I achieved for ENGG2400 in T2 2020, I will use the same approach in my T3 2020 online final exam.
Exam randomisation
Associate Professor Mark Humphery-Jenner of the UNSW Business School shares how he moved his face-to-face exams online in FINS3616 International Business Finance whilst minimising academic misconduct using various randomisation techniques.
***
During T1 2020, the subject shifted online in response to the COVID-19 pandemic, which meant the exams also had to run online. This created problems with students collaborating with each other and having access to a myriad of course materials and online resources during the exam.
I redesigned the exams such that they were (1) problem-based, (2) randomised in several ways, and (3) required students to agree to academic integrity rules before they could undertake the exams. For one of the three exams, I also prevented students from navigating freely through the questions and forced them to answer the questions sequentially. The problem-based questions were designed to make it more difficult for students to rote-regurgitate lecture materials and to make collaboration during the exam so time-consuming as to be self-defeating.
When randomising exams, I did the following:
- Using the “calculated” question type in Moodle, the numbers within individual questions were randomised. Thus, every student had unique questions, making it futile to simply share answers (a sketch of this idea follows the list).
- For each “question”, I had several versions, all phrased differently and with different names. This made the questions appear more difficult, further hindering collaboration.
- Within the exam, I randomised the question order. This does not in itself stop collaboration, but it makes it time-consuming.
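As a rough illustration of the first technique, the sketch below shows how per-student numbers can be derived deterministically, in the spirit of Moodle's “calculated” question type (Moodle does this internally; the function names and parameter ranges here are invented for the example).

```javascript
// Illustrative sketch only: deterministic per-student randomisation of the
// numbers in a question, so each student sees unique but reproducible values.
const crypto = require("crypto");

// Hash (studentId, questionId) into a uniform value in [0, 1).
function seededRandom(studentId, questionId) {
  const h = crypto.createHash("sha256").update(`${studentId}:${questionId}`).digest();
  return h.readUInt32BE(0) / 2 ** 32;
}

// Example: draw an exchange rate in [1.20, 1.80], rounded to 2 decimal places.
function randomisedRate(studentId) {
  const r = seededRandom(studentId, "Q1-fx-rate");
  return Math.round((1.2 + r * 0.6) * 100) / 100;
}

console.log(randomisedRate("z1234567")); // unique to the student, same on re-grade
```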
I have created some resources on how to create online exams, reduce cheating in exams, and randomise numbers in exams.
The outcome of my redesigned exams was that the exam averages were in line with the grade distribution expected when exams are conducted in person, which suggests that these methods were useful in minimising academic misconduct in online exams.
Several students responded negatively. Some disliked having to answer exam questions sequentially. Students also complained about typographical errors arising from erroneous pluralisation (or lack thereof) due to the application of random numbers. Some even tried to make these typos out to be a fundamental flaw so as to invalidate the exam. Several students gave the impression that attempts to maintain academic integrity were inherently repulsive. A couple of students disliked the problem questions, but there was also contradictory feedback: some liked the questions using pop culture-related names, but some did not.
In terms of what worked well using this online assessment approach, cheating appeared to decrease, judging by the course grade distribution. In addition, problem-based questions are more authentic and require students to apply their knowledge to real-world, fact-based scenarios. Furthermore, randomisation worked well overall, but it is important to check for typos due to incorrect pluralisation.
In terms of what could be improved, it is currently extremely time-consuming to create randomised exams, and an argument could be made that this time is essentially wasted, as the questions will likely leak online. When using the “calculated” question type in Moodle, typos due to incorrect pluralisation are difficult to avoid. (It is worth noting that the “STACK” question type can be used to prevent these typos.) Furthermore, formula creation in Moodle is time-consuming, and proof-reading is essential.
My overall thoughts are that exam randomisation is a good way to reduce cheating in numerical exams. However, it is extremely time-consuming and requires a significant amount of proof-reading and error checking. Some students seem to object to the mere thought of reducing cheating and decompensate when an exam is the least bit challenging. I would recommend that the faculty limit the number of exams so as to limit the time they must spend on this task.
Online Assessment using HTML/CSS/JavaScript
Associate Professor Mario Attard and Dr Xiaojun Chen of the UNSW School of Civil and Environmental Engineering share how they maintained the integrity of the assessments in their course Mechanics of Solids ENGG2400 using HTML/CSS/JavaScript.
***
With the increasing need for online assessment, the existing modules provided by Moodle cannot meet advanced requirements. Modern web-based technologies allow us to develop highly customised webpages that compensate for the drawbacks of Moodle. Current web APIs can help us generate random questions, track student records and identify potential cheating behaviour. Such pages can be easily embedded in Moodle, linked to the gradebook and delivered to other eLearning platforms as well.
The quiz and exam pages were written in HTML/CSS/JavaScript, so their layout, format and question types were not limited to those available in Moodle. Questions were randomly generated, and randomisation was not limited to numbers; text, figures and charts could also be randomised. Nor were answers limited to numbers and multiple choice: answers could also be drawings, plotted diagrams or mathematical formulae. Students could receive totally different questions that had to be solved using different algorithms and methods, which reduced the chances of students collaborating with each other. This was achieved through JavaScript coding.
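A minimal browser-side sketch of this kind of randomisation is below. It is not the authors' actual code; the question text, parameter ranges and element ID are invented, but it shows how wording, numbers and even the required solution method can differ per student.

```javascript
// Two variants of "the same" question that must be solved by different methods.
const variants = [
  { text: "A beam of length {L} m carries a central point load of {P} kN. Find the maximum bending moment.",
    solve: (L, P) => (P * L) / 4 },       // simply supported, point load
  { text: "A beam of length {L} m carries a uniformly distributed load of {P} kN/m. Find the maximum bending moment.",
    solve: (L, P) => (P * L * L) / 8 },   // simply supported, UDL
];

// Render one student's variant into the page and return the model answer
// (which, in a real system, would stay server-side for marking).
function renderQuestion(studentSeed) {
  const v = variants[studentSeed % variants.length];
  const L = 4 + (studentSeed % 5);        // lengths 4–8 m
  const P = 10 + (studentSeed % 7) * 5;   // loads 10–40
  document.getElementById("question").textContent =
    v.text.replace("{L}", L).replace("{P}", P);
  return v.solve(L, P);
}
```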
Randomisation allows a signature to be associated with each student, making it possible to identify students who upload questions to the internet. A system that uniquely associates each student with each examination question therefore discourages the uploading of questions.
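A hypothetical sketch of this tracing idea: because each student's parameters are derived from their identifier, a question that surfaces on a cheating website can be matched back to the student whose variant it is (the derivation below is invented for illustration).

```javascript
const crypto = require("crypto");

// The same derivation the exam generator would use to build each variant.
function parametersFor(studentId) {
  const h = crypto.createHash("sha256").update(studentId).digest();
  return { L: 4 + (h[0] % 5), P: 10 + (h[1] % 7) * 5 };
}

// Regenerate every student's parameters and match against the leaked values.
function findLeakCandidates(leaked, studentIds) {
  return studentIds.filter((id) => {
    const { L, P } = parametersFor(id);
    return L === leaked.L && P === leaked.P;
  });
}

console.log(findLeakCandidates({ L: 6, P: 25 }, ["z1234567", "z7654321"]));
```

With enough randomised parameters per question, the chance of two students sharing the same full signature becomes negligible, which is what makes the match meaningful.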
The start time for any online quiz was made the same for all students, independent of their location. The time period to complete each online quiz was also made tight to discourage collaboration. The time constraint and the randomisation of the quiz parameters made it difficult to seek help, at least from students who were completing the quiz at the same time.
Figure 1: Randomisation in figures and charts.
Using web APIs, we were able to log students' IP addresses and other device information, and to limit or disable their browser usage. The system and geolocation data collected during the exam were cross-checked programmatically to prevent cheating. Students could not access the online assessment on two computers simultaneously, which further reduced the chances of cheating. We identified some abnormal behaviour during the exam: a few students were accessing the exam from two different devices in different locations. This access was disabled immediately. Students found the assessment fair, since the chances of cheating were reduced.
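The sketch below illustrates one way such a concurrent-access check might work on the server; it is an invented example, not the authors' system.

```javascript
// Flag a student who opens the exam from a second device or network while an
// earlier session is still active.
const activeSessions = new Map(); // studentId -> { ip, userAgent, startedAt }

function registerSession(studentId, ip, userAgent) {
  const existing = activeSessions.get(studentId);
  if (existing && (existing.ip !== ip || existing.userAgent !== userAgent)) {
    // Same account, different device/IP during the exam window: block and log.
    return { allowed: false, reason: "concurrent session from another device" };
  }
  activeSessions.set(studentId, { ip, userAgent, startedAt: Date.now() });
  return { allowed: true };
}

console.log(registerSession("z1234567", "1.2.3.4", "Safari")); // allowed
console.log(registerSession("z1234567", "5.6.7.8", "Chrome")); // blocked
```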
An advanced marking system was also developed so that partial and carry-over marks could be identified and awarded. The students were happy with this system, as it allowed them to receive some marks even if they made small mistakes. Students also received detailed feedback that used their own inputs in the calculations and compared them with the correct solution. Overall, the students were happy with the quiz feedback.
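Carry-over (error-carried-forward) marking can be sketched as follows. This is a hypothetical illustration of the idea rather than the authors' marking system, with the tolerance and question structure invented for the example.

```javascript
// Part (b) earns full marks if it matches either the true answer or the correct
// method applied to the student's own part (a) value, so one early slip does
// not cascade into zero marks downstream.
function markWithCarryOver(student, correct) {
  const tol = 0.01; // relative tolerance for numeric comparison (assumed)
  const close = (x, y) => Math.abs(x - y) <= tol * Math.max(1, Math.abs(y));
  const marks = { a: 0, b: 0 };

  if (close(student.a, correct.a)) marks.a = 1;

  const trueB = correct.bFrom(correct.a);    // fully correct (b)
  const carriedB = correct.bFrom(student.a); // correct method, student's (a)
  if (close(student.b, trueB) || close(student.b, carriedB)) marks.b = 1;

  return marks;
}

// Example: (a) axial force N = 50 kN; (b) stress = N / A with A = 0.002 m².
const model = { a: 50, bFrom: (N) => N / 0.002 };
console.log(markWithCarryOver({ a: 48, b: 24000 }, model)); // { a: 0, b: 1 }
```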
Some students were not familiar with the new online system and had some difficulties at the beginning. In the future, detailed instructions and a mock exam will be used to familiarise students with the new system.
Maintaining integrity of final exam without invigilation
Associate Professor Eric C.K. Cheung of the UNSW Business School shares how he randomised the final exams in his course ACTL2111/ACTL5102 Financial Mathematics to maintain its integrity without invigilation.
***
In T1 2020, the final exam needed to move online because of COVID-19. Preserving the integrity of the final exam was extremely important, especially as the course has accreditation arrangements with the Actuaries Institute. On the other hand, I was concerned that using online invigilation services (e.g. Examity) would put students under unnecessary stress due to the requirements for internet bandwidth and special equipment (e.g. a writing board and camera). This motivated me to design an online exam with randomisation and shuffling to minimise the chances of cheating without invigilation; it was the first time I implemented such an exam format.
The online exam was created on the Moodle Quiz platform, with students entering their answers in text boxes. It was a two-hour exam consisting of two parts: Part I had 18 questions, each demanding a single numerical answer; Part II had 3 questions, each demanding a short essay. Each question in Part I came in three versions, e.g. Q1 had versions Q1A, Q1B and Q1C, and each student was randomly assigned one version. The numerical information in these versions differed, e.g. Q1A, Q1B and Q1C might ask the student to do the calculations under an interest rate of 5%, 6% and 7% respectively. The versions were therefore of identical difficulty (to ensure fairness in terms of accreditation) but had different correct numerical answers.
The order of the questions was also shuffled for each student, so Student 1 might get Q5C, Q9A, Q1B etc. while Student 2 might get Q12A, Q3C, Q7C etc. With the two-hour limit, along with the randomisation and shuffling taking place, students would not be able to gain an advantage by working together. The essay-type questions in Part II were shuffled as well, and plagiarism could be detected via the usual means. A sketch of this set-up follows Figure 1 below.
Figure 1: Moodle Quiz set-up – Randomise within questions and shuffle within sections
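A minimal sketch of this version assignment and shuffling is below. Moodle's "Randomise within questions" and "Shuffle within sections" settings do this internally; the seeded generator and names here are invented for illustration.

```javascript
const crypto = require("crypto");

// Deterministic per-student random number generator (xorshift32 seeded by a hash).
function makeRng(seedString) {
  let state =
    crypto.createHash("sha256").update(seedString).digest().readUInt32BE(0) || 1;
  return () => {
    state ^= state << 13; state ^= state >>> 17; state ^= state << 5;
    return (state >>> 0) / 2 ** 32; // uniform in [0, 1)
  };
}

// Assign each student one of three versions per question, then shuffle the order.
function buildPaper(studentId, questionIds) {
  const rng = makeRng(studentId);
  const paper = questionIds.map((q) => q + "ABC"[Math.floor(rng() * 3)]);
  for (let i = paper.length - 1; i > 0; i--) { // Fisher–Yates shuffle
    const j = Math.floor(rng() * (i + 1));
    [paper[i], paper[j]] = [paper[j], paper[i]];
  }
  return paper;
}

console.log(buildPaper("z1234567", ["Q1", "Q2", "Q3", "Q4", "Q5"]));
// e.g. [ 'Q3B', 'Q5A', 'Q1C', 'Q2A', 'Q4C' ] — unique per student, reproducible
```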
Before the exam, students were concerned about the numerical section because only the final answers (not the steps) would be graded; they were afraid of getting a very poor grade despite getting the steps correct. It turned out that the students' performance and grade distribution were comparable to those in traditional exams, where partial marks are awarded for correct steps with incorrect answers. This success was due to a very good mix of questions with different levels of difficulty.
Randomisation and shuffling of questions worked extremely well. The numerical answers were graded automatically by the Moodle platform, while I graded the short essays manually. No issues were identified.
After the online exam, teachers needed to check whether there were significant differences in the performance of students across the different versions of the same question. For example, 42.53%, 42.86% and 44.44% of students got the correct answer for the three versions of Q2.
Figure 2: Examining the differences in the performance of the students across different versions of the same question
My exam was the first to take place, as it was scheduled at 9:00 am on the first day of the exam period. When some students reported to me after the exam that the loading time between exam pages was longer than expected (up to 30 seconds), I immediately shared this feedback with other teachers in the School, and those who had designed a similar type of exam for their courses proceeded to give students additional time to complete it. In T2, I designed a similar exam to T1's but added 10 extra minutes to account for loading-time latency.
Only four cases of special consideration were approved, which is a very good result given that there were more than 250 students in the class. It should be noted that these four cases were not related to the design of the online exam.
The small number of special consideration cases can be attributed to the clear exam instructions (e.g. the number of significant figures required in numerical answers) and to a sample online exam provided to students well before the exam. This sample exam allowed students to familiarise themselves with the flow of the exam and what to expect on the day, enabling them to prepare accordingly.
Summary and Reflections
In summary, it takes much more time and effort to prepare an online exam on the Moodle Quiz platform than a traditional exam. If done properly, grading can be (partially) automated, which can be viewed as shifting some grading time into the design of the online exam.
A considerable amount of testing of the online exam is required beforehand, and it can be helpful to have a colleague with knowledge of the course to go through the online exam platform and provide feedback.
It is important to provide students with a sample online exam so they can get familiar with the platform. Clear and detailed instructions on the online exam should be provided to the students early, such as how many significant figures are required in numerical answers and the tolerance of errors in automated grading.
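As a rough sketch of how such a tolerance might be applied in automated grading (the significant-figure count and comparison rule below are assumptions for illustration, not the actual grading code):

```javascript
// Round a value to a given number of significant figures.
function roundSig(x, sig) {
  if (x === 0) return 0;
  const factor = 10 ** (sig - Math.ceil(Math.log10(Math.abs(x))));
  return Math.round(x * factor) / factor;
}

// Accept the student's answer if it matches the model answer once both are
// rounded to the required number of significant figures (4 in this example).
function isCorrect(studentAnswer, modelAnswer, sig = 4) {
  return roundSig(studentAnswer, sig) === roundSig(modelAnswer, sig);
}

console.log(isCorrect(1234.4, 1234.37)); // true: both round to 1234
console.log(isCorrect(1239.0, 1234.37)); // false
```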
Using transcripts in video submissions
Dr Lynn Gribble of the UNSW Business School shares how she uses transcripts to maintain the integrity of video submissions in COMM5010 Strategy, Marketing and Management, which she co-convenes with Dr Heather Crawford.
Please note the video can only be viewed by UNSW staff.
Using videos, analytics and a written assessment task to teach students about academic integrity
Dr Lynn Gribble of the UNSW Business School shares how she uses videos, analytics and a written assessment task in MGMT5050 Professional Skills and Ethics to teach students about academic integrity.
Please note the video can only be viewed by UNSW staff.
Scaffolding Weekly Formative Tasks Leading to a Summative Assignment Online
Professor Bob Fox, Curriculum Lead from the Office of the Pro Vice-Chancellor (Education & Student Experience), shares how he uses scaffolding to maintain the integrity of the assessment tasks in EDST5126 Issues in Higher Education, a core course of the MEd(HE) in the School of Education, Faculty of Arts and Social Sciences.
This case study revolves around a major assignment completed by students in a part-time Master of Education course. The students were all in-service educational professionals. Each week students were presented with an educational issue, such as ‘governance and policy’, ‘the role of technology in learning’ or ‘changing learning environments’, to research and write up. The final assessment required students to reflect on (1) their learning from each task and (2) how that learning impacted on their professional practice. This major assignment constituted 40% of the final marks for the course.
Every student was required to establish a personal blog, linked to the Moodle course page, and to submit their work on the weekly tasks for peer review and teacher comment. The resultant feedback was posted to the student blog by the end of each week. Students were then required to adjust their weekly submissions based on the feedback given by their peers and the teacher. Records of weekly submissions, feedback given and improvements made were held in student folders within Moodle. Towards the end of the term, students made final edits to all their weekly submissions, taking into account the feedback received, and combined their completed tasks into a final cohesive reflective report for submission.
Requiring students to submit formative weekly tasks leading to a major summative end-of-term assessment produced evidence of individual, regular, scaffolded progression and improvement across the entire term. According to student evaluations, the constant review, feedback and adjustment process made the assessment an individually relevant and professionally authentic experience. Students appreciated completing small weekly tasks and receiving regular, ongoing feedback on their work, knowing that the weekly tasks and feedback would help them build towards and improve the major end-of-term assignment. During the term, students were asked to reflect on the usefulness and effectiveness of the tasks and the weekly feedback they were given by their peers: overall, students were positive on both counts. Any critical reflections were discussed in the online class and, where possible, adjustments were made to the process and tasks.
Each time I have used this scaffolded approach to assignment tasks, I have made changes based on feedback from students, discussions with colleagues, readings and my own experience. Every cohort is different from the previous one, so new ideas to enhance the assessment and tasks continue to evolve. To assist students in providing useful feedback on each other's weekly tasks, Biggs' SOLO taxonomy was introduced as a key component of the feedback rubric. A 5-minute video explains how SOLO can be introduced to students to structure peer review and to help them understand a key part of the framework used in marking assignment work: Assessing Qualitative Differences in Student Work using SOLO.
Note:
- Teacher feedback on students' weekly individual tasks tended to be general rather than individual, and therefore not too onerous, along the lines of: ‘The strongest submissions included…’ or ‘The weaker submissions needed to include…’. Peer feedback was required to be more detailed.
- An extended variation of this scaffolded approach to assessment can be found in: Trinidad, S., & Fox, R. (2007). But did they learn? Assessment driving the learning, technology supporting the process. In S. Frankland (Ed.), Enhancing Teaching and Learning through Assessment (pp. 382-391). The Netherlands: Springer.
- The design of this assessment is generic and can be relatively easily adapted. Please contact Professor Fox for further information: [email protected]
Resources
- Visit the Contract Cheating and Assessment Design webpage to learn about an evidence-based framework for assessment design that minimises contract cheating.
- Read TEQSA's good practice in creating online assessments to minimise contract cheating.
- Access the Conduct & Integrity Office SharePoint site for more information on any of the above.
- The Academic Integrity Teams site is a useful resource where you can ask Faculty Integrity Partners general questions, keep up to date with the latest advice and webinars, and engage in a community of practice.
- Information for students can be found on the Current Students Academic Integrity at UNSW webpage and Arc@UNSW.
Contacts
- Contact the Faculty Integrity Partners in the Conduct & Integrity Office if you have any related questions.
- Contact your School Student Integrity Adviser, who is responsible for providing oversight and consistency in handling plagiarism within the School, as well as academic guidance and advice on assessments containing plagiarism.