Introduction
Generative AI offers exciting possibilities for both teachers and students to innovate and streamline their work at university. As teachers, we must encourage students to experiment with and embrace emerging technologies, while recognising that students should not become overly dependent on any one technology. Demonstrating independent thought and applying knowledge remain essential to attaining the attributes of a university graduate. Hence, students must receive course-specific instructions on the different categories of permissible AI use.
We have prepared statements that you can use to inform students about the extent to which generative AI tools may be used in different assessment tasks. This text is designed to clarify UNSW’s position on assessment integrity given the rise in access to AI platforms, while also allowing for customisations for your course.
Please note: UNSW is rolling out a new enterprise course outline system (ECOS), which will help standardise course outlines and ensure consistent information, including regarding the use of generative AI in courses.
Key principles of AI usage in assessment
There are two key principles that can guide decisions for using AI in the design and delivery of assessment to students.
Be honest and transparent about the use of any AI tool where it would reasonably be expected that use of the tool would be disclosed.
This is particularly the case where such tools have not been commonly used in the past, such as in communications or feedback to students. This is in line with the academic practice of attribution. There may be reasons for non-disclosure such as privacy concerns or that the tool is generally expected to be used.
Ensure that any AI-based output is reviewed with all due diligence before being released or relied upon.
This is particularly important to ensure that you avoid bias and factual errors in the output.
Assessment Considerations for Course and Program Development
Developing assessment and feedback in an environment of pervasive generative AI is complex, and multiple considerations are at play. To assist colleagues proposing new assessment in courses and programs, the following guide has been produced. It focuses on factors that approving committees need to consider, but it draws on wider principles for assessment and feedback.
Assessment Considerations for Course and Program Development
Read more on the central tenets of good assessment design. The considerations inside this document should be read in light of the five assessment principles set out in the UNSW Assessment Policy.
Assessments in Transition: Adapting to AI from Task to Program Level
In June 2024, Associate Professor Jason Lodge delivered a workshop to the UNSW Business School on reimagining assessments systematically over the course of a degree program in the era of Gen AI. Watch the recording at the link below.
Assessment design
The prospect of redesigning course activities and assessments to account for AI tools can seem overwhelming. To assist academics, Giordana Orsini and Nicole Nguyen (UNSW Engineering) and Karen Hielscher (UNSW Medicine & Health) have developed a checklist to help academics adapt course and assessment design in the age of generative AI. You can download the checklist from the link below.
The checklist has been created based on recommendations from a paper by Sasha Nikolic, Scott Daniel, Rezwanul Haque, Marina Belkina, Ghulam M. Hassan, Sarah Grundy, Sarah Lyden, Peter Neal and Caz Sandison, as well as valuable input from Dr May Lim (UNSW Faculty of Engineering).
Checklist for self-auditing assessments in the world of AI
Students need course-specific instructions laying out the extent to which they can use AI for assessments and learning activities. Being specific and transparent about AI use in the assessment instructions gives students approved parameters to work within, reducing stress and building confidence for students and teachers alike.
Based on extensive feedback across UNSW, six high-level categories have been defined for assessments that include some degree of AI use, as well as an additional category for assessments where AI is unlikely to be used.
The degrees of permission, each with its associated generative AI permission categories, are:

- No generative AI use permitted
- Use of generative AI permitted prior to development of final artefact
- Use of generative AI permitted in completing the assessment
- Assessments where AI is unlikely to be used
Convenors can visit the ECOS Assessment Guidance page for further information on each category (UNSW zID required).
If your Program or School has their own standard wording to describe permitted degrees of AI use in assessments, please email [email protected] to provide this wording so it can be featured on the ECOS Assessment Guidance page.
What needs to be considered before integrating AI in my assessment?
When accounting for the use of generative AI in students’ classwork and assessments, carefully consider the following areas:
You must ensure equitable access
When ChatGPT or other forms of GenAI are accepted as part of an assessment, academics must ensure that the tools are easily accessible to everyone. There must be no physical, geographical or financial restrictions on students’ use of the tool. For example, while ChatGPT and many other tools are currently freely accessible, subscription models already exist for premium services that may offer more features and produce higher-quality or more accurate content.
You must be clear with teaching staff
Marking is often done by casual staff who might be new to the university and to marking generally. It is important to brief markers on the position on GenAI in a particular course and give support on what to look for, as well as what platforms can and cannot be used. School Student Integrity Advisers (SSIAs) and senior faculty members can assist in this process.
As a rule, markers must not use AI platforms that have not been approved by UNSW IT (such as ChatGPT) for marking, feedback, or monitoring improper AI use. However, the following platforms have been made available for use by UNSW staff:
- For marking and giving feedback on student work, UNSW IT has approved the use of Microsoft Copilot, because Copilot does not save the data entered after a session ends. This safeguards students' privacy and prevents their work from being used to train AI.
- For detecting improper AI use, UNSW IT only authorises the use of Turnitin's AI Writing Detection Tool.
You must be clear with students
GenAI could possibly fall under “academic cheating services” if it produces a substantial part of work that a student was required to complete as original work themselves. That last point is important: whether the use of AI is a form of cheating depends almost entirely on the instructions provided to students.
If you decide to allow AI use in assessments or learning activities, please consult our categories of permitted AI use in assessment, together with the advice in the Assessment Design section, for approaches you can institute in your course.
How can I know if my assessment could be completed solely with AI?
Follow the steps below to review your assessment design and questions to see if they can be answered using GenAI tools:
- Use your zID and zPass to set up your Microsoft Copilot (with Commercial Data Protection) account.
- Input the assessment question and ask for an answer.
- Regenerate the answer a few times to see the variations the AI produces.
- Ask the tool to refine or expand on a previous output multiple times.
- If it is a long question, try breaking down the question into smaller sections.
- Try adding more specific instructions to the prompt regarding format, emphasis, etc.
- Ask the tool to generate a version of the question it cannot easily answer. Test that question and add your own tweaks.
Anyone who is building or training a bespoke Generative AI tool within their work at UNSW, or working closely with those who do, can consider incorporating Retrieval-Augmented Generation (RAG) into the tool. By incorporating trusted sources into a GenAI tool’s knowledge base, RAG can help make the tool more current and better tailored for specific purposes. Specifically, RAG can help achieve the following:
- Assessment Questions: RAG can be particularly useful for assessing whether an AI-generated answer to an assessment question is sufficient to fulfil the stated learning outcomes. By cross-referencing with trusted sources, it helps verify whether the tool’s response aligns with established knowledge.
- Current Information: RAG allows a GenAI tool to pull information from authoritative sources beyond its initial training data, ensuring that the tool reflects the most recent knowledge.
- Tailored Responses: When faced with specific tasks or assessment questions, RAG enables the tool to provide more precise and context-aware answers. It tailors responses by combining generative capabilities with pre-approved information.
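The mechanics behind RAG can be sketched briefly. The toy retriever, corpus and prompt template below are illustrative assumptions only, not part of any UNSW-approved tool; production systems would use vector embeddings for retrieval and send the assembled prompt to a generative model.

```python
# Minimal RAG sketch (illustrative only): rank trusted passages by
# word overlap with the query, then build a prompt that grounds the
# model's answer in those passages.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the query
    (a toy stand-in for embedding-based similarity search)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(question: str, corpus: list[str]) -> str:
    """Combine retrieved passages with the question so the model
    answers from trusted sources, not its training data alone."""
    context = "\n".join(f"- {p}" for p in retrieve(question, corpus))
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )

# Hypothetical "trusted source" snippets for illustration.
trusted_sources = [
    "Course learning outcome 1: students can evaluate evidence critically.",
    "Assessment policy: submissions must be substantially the student's own work.",
    "Library hours: open 8am to 10pm on weekdays.",
]

prompt = build_grounded_prompt(
    "Does this answer demonstrate critical evaluation of evidence by students?",
    trusted_sources,
)
print(prompt)
```

Because only the highest-scoring passages are retrieved, irrelevant material (the library hours above) never reaches the prompt, which is what keeps the tool's responses current and tailored.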
If you determine that your assessment question or design is answerable solely with GenAI, consult the Checklist for self-auditing assessments in the world of AI for ideas about how to redesign assessments for your course.
How can we detect students’ improper use of GenAI?
There are two answers to this question: one concerns detection tools, the other human interaction with our students.
Digital Detection Tools
UNSW only authorises the use of Turnitin's AI Writing Detection Tool for detecting improper AI use. Students' work should not be uploaded to any other platform because:
- Only Turnitin has been approved by UNSW Cyber Security as protecting student privacy.
- The accuracy of other detection tools is extremely low. Even OpenAI have warned that their own detection tool is accurate less than 30% of the time – an enormous error rate.
Many small companies and individuals have developed various detection tools. However, these tools' sites frequently lack clarity regarding the cookies and information they collect, their storage methods, and their data and privacy policies.
To be clear: if Turnitin identifies the potential for AI writing in a response, this is merely a flag for an academic investigation. Markers should rely on their own professional judgement, not the AI detection tool.
Human interactions
In the Conduct & Integrity Office’s experience, academics are great at picking up on the signs of cheating behaviours themselves:
- Assessment body: Generic answers, overly logical arguments, distorted truth and/or fabricated events/names/places are all tell-tale signs. Generally, a marker should watch out for a written assignment with a writing style that does not match the student's in-class written work or posts and/or expected capability for the stage of their degree.
- Reference list: Academics are also great at picking up fake references, or ones that don’t match reading lists or the assessment body text.
If you suspect cheating behaviours, have a discussion with the student – can they explain the steps they undertook to complete the assessment? Can they explain the work that they have done and what their submission means?
How can I investigate students’ improper use of AI?
Step 1: Initial checks
There are a number of approaches that can be used to detect or deter the use of GenAI to complete assessments. In the first instance, evidence of AI-generated work could justify a significant reduction in the mark for the assessment without the need to prove misconduct.
You can complete a sense-check on the submission, to see if it is in fact a false positive. This may include:
- Paying attention to references and checking to see if they exist. They are likely to be plausible but fake.
- Looking for formulaic answer structures. AI writes to the level of instruction given in the prompt, but will in all cases produce the most statistically probable version of that instruction, so the output tends to be generic for the format.
- Looking for an artificially even-handed treatment of sources. The AI cannot weigh the strength of the sources itself so it will generally avoid preferring one over the other. This is also dependent on the way the question has been developed.
- Checking the assessment instructions to confirm to what extent students are prohibited from using GenAI tools. If their use is permitted, they must be properly credited by the student, but the submission must be substantially the student’s own work.
As a final component of a sense-check, Turnitin has noted that some forms of formulaic writing can lead to false positives. See Turnitin's blog on understanding false positives.
Step 2: Check the signs
AI writing is generated from predictions of what the next word should be. The predictions in some instances are formulaic and correct, and in other instances unusual and incorrect. This means there is no one way of identifying AI-generated writing. There are, however, a number of signs to look for in sense-checking the Turnitin AI report:
- Very general statements or generic answers
- Rigid or formulaically logical arguments and answer structure
- No spelling mistakes, typos or other grammatical errors
- Fabricated names, dates, events, places
- Incorrect or fake referencing – references that do not match the text
- Inconsistent writing style throughout the assignment
- Inconsistent terminology, concepts, or expected capability for the stage of the degree
- Written assignment writing style that does not match the student's in-class verbal or written work
- Repeated errors
- Artificially even-handed treatment of sources – the AI cannot weigh the strength of the sources itself, so it will generally avoid preferring one over another (this also depends on how the question has been framed)
These signs are not definitive proof of cheating using GenAI. It is important to review further and use academic judgment before taking any action. If you suspect that a student has cheated, it's best to speak with them directly and discuss your concerns.
Step 3: Discuss with the student
If you have a reasonable suspicion that the student has used GenAI improperly, it will be necessary to have a conversation with them about it. However, it is important to consider that improper use of AI does not necessarily represent a purposeful effort to cheat.
The teacher's guide on Conversation Starters with Students (resource below) provides several example opening conversations, appropriate responses to each situation, and some key takeaway points.
Seek, in as non-accusatory a way as possible, to validate potential unauthorised use by asking the student:
- for copies of drafts of their assignment
- whether they can explain the steps they undertook to complete the assessment
- whether they can orally explain the work they completed and what their submission means, so that they demonstrate the learning outcomes for the assignment
Step 4: Contact your SSIA for further assistance
If there is suspicion that a submission contains unauthorised AI-generated content, seek advice from your School Student Integrity Adviser (SSIA) on the process for managing or referring serious student misconduct matters.
Provide the SSIA with evidence of clear instructions provided to students that this degree of AI use in the assessment was unauthorised.
Step 5: Contact the Conduct & Integrity Office
If your concerns have not been addressed by the above steps, please refer the matter to the Conduct & Integrity Office via the Conduct & Integrity Office site, link in myUNSW, or by emailing [email protected] with:
- a copy of the assessment and evidence that clear instructions were provided to the student that this use of AI was unauthorised
- the reasons for your suspicions, including the results of any viva voce/oral assessment
Note: This process is similar to the approach for suspicions of contract cheating. The Conduct & Integrity Office can help navigate any suspicions and questions about these matters or advice on student conduct.
Where the unauthorised use of AI in an assessment is admitted or determined, a finding of serious student misconduct is made – as a breach of Principle 3 of the Student Code of Conduct which states that students must act with integrity, honesty and trust.
The penalties for a finding of this sort are consistent with those for Serious Student Misconduct and Serious Plagiarism: they would normally involve a mark of 00FL for the course, suspension or exclusion, depending on the matter.
Conversation starters with students
How can a student be penalised for improper use of GenAI?
Students must be provided with clear instructions stating whether they can use ChatGPT or other forms of GenAI for each assessment or learning activity and if so, for what purpose. This notification needs to be provided to students in writing and through multiple channels (e.g. written in assessment instructions and the course outline, communicated verbally in lectures and tutorials).
If you suspect that someone is using GenAI without proper authorisation (based on your professional judgment rather than a score from an AI detection tool), you must report it to the Conduct & Integrity Office. This will be considered a potential case of serious student misconduct, and it will be managed under the Student Misconduct Procedure.