Introduction
Generative AI offers exciting possibilities for both teachers and students to innovate and streamline their work at university. As teachers, we must encourage students to experiment and embrace emerging technologies, while recognising that students should not be overly dependent on any one technology. Demonstrating independent thought and applying knowledge remain essential to successfully attain the attributes of a university graduate. Hence, students must receive course-specific instructions on the different categories of permissible AI use.
We have prepared statements that you can use to inform students about the extent to which generative AI tools may be used in different assessment tasks. This text is designed to clarify UNSW’s position on assessment integrity given the rise in access to AI platforms, while also allowing for customisations for your course.
Please note: UNSW is rolling out a new enterprise course outline system (ECOS), which will help standardise course outlines and ensure consistent information, including regarding the use of generative AI in courses.
Key principles of AI usage in assessment
There are two key principles that can guide decisions for using AI in the design and delivery of assessment to students.
Be honest and transparent about the use of any AI tool where it would reasonably be expected that use of the tool would be disclosed.
This is particularly the case where such tools have not been commonly used in the past, such as in communications or feedback to students, and is in line with the academic practice of attribution. There may be legitimate reasons for non-disclosure, such as privacy concerns, or the tool may be so commonly used that disclosure is not expected.
Ensure that any AI-based output is reviewed with all due diligence before being released or relied upon.
This is particularly important to ensure that you avoid bias and factual errors in the output.
Assessment Considerations for Course and Program Development
Developing assessment and feedback in an environment of pervasive generative AI is complex and multiple considerations are at play. In order to assist colleagues proposing new assessment in courses and programs, the following guide has been produced. It is focussed on factors that approving committees need to consider but it draws on wider principles for assessment and feedback.
Assessment Considerations for Course and Program Development
Read more on the central tenets of good assessment design. The considerations in this document should be read in light of the five assessment principles set out in the UNSW Assessment Policy.
Assessments in Transition: Adapting to AI from Task to Program Level
In June 2024, Associate Professor Jason Lodge delivered a workshop to the UNSW Business School on reimagining assessments systematically over the course of a degree program in the era of Gen AI. Watch the recording at the link below.
Assessment design
The prospect of redesigning course activities and assessments to account for AI tools can seem overwhelming. To assist academics, Giordana Orsini and Nicole Nguyen (UNSW Engineering) and Karen Hielscher (UNSW Medicine & Health) have developed a checklist to help academics adapt course and assessment design in the age of generative AI. You can download the checklist from the link below.
The checklist has been created based on recommendations from a paper by Sasha Nikolic, Scott Daniel, Rezwanul Haque, Marina Belkina, Ghulam M. Hassan, Sarah Grundy, Sarah Lyden, Peter Neal and Caz Sandison, as well as valuable input from Dr May Lim (UNSW Faculty of Engineering).
Checklist for self-auditing assessments in the world of AI
Students need course-specific instructions laying out the extent to which they can use AI for assessments and learning activities. Being specific and transparent about AI use in the assessment instructions gives students approved parameters to work within, reducing stress and building confidence for students and teachers alike.
Based on extensive feedback across UNSW, six high-level categories have been defined for assessments that include some degree of AI use, as well as an additional category for assessments where AI is unlikely to be used.
The categories are grouped by degree of permission:
- No generative AI use permitted
- Use of generative AI permitted prior to development of final artefact
- Use of generative AI permitted in completing the assessment
- Assessments where AI is unlikely to be used
Convenors can visit the ECOS Assessment Guidance page for further information on each category (UNSW zID required).
If your Program or School has its own standard wording to describe permitted degrees of AI use in assessments, please email [email protected] to provide this wording so it can be featured on the ECOS Assessment Guidance page.
In the video above, Professor Alex Steel, Director of AI Strategy in Education, explains the pedagogical underpinnings of the six categories of permissible AI use within assessments at UNSW.
What needs to be considered before integrating AI in my assessment?
When accounting for the use of generative AI in students’ classwork and assessments, carefully consider the following areas:
You must ensure equitable access
When ChatGPT or other forms of GenAI are accepted as part of an assessment, academics must ensure that the tools are easily accessible to everyone. There must be no physical, geographical or financial restrictions on students’ use of the tool. For example, while ChatGPT and many other tools are currently freely accessible, there are already subscription models for premium services that may offer more features and produce higher-quality or more accurate content.
You must be clear with teaching staff
Marking is often done by casual staff who might be new to the university and to marking generally. It is important to brief markers on the position on GenAI in a particular course and give support on what to look for, as well as what platforms can and cannot be used. School Student Integrity Advisers (SSIAs) and senior faculty members can assist in this process.
As a rule, markers must not use AI platforms that have not been approved by UNSW IT (such as ChatGPT) for marking, feedback, or monitoring improper AI use. However, the platforms listed below have been made available for use by UNSW staff:
- For marking and giving feedback on student work, UNSW IT has approved the use of Microsoft Copilot, because Copilot does not save the data entered after a session ends. This safeguards students' privacy and prevents their work from being used to train AI.
- For detecting improper AI use, UNSW IT only authorises the use of Turnitin's AI Writing Detection Tool.
You must be clear with students
GenAI could possibly fall under “academic cheating services” if it produces a substantial part of work for a student that they were required to complete as original work themselves. That last part is important. Whether or not the use of AI is a form of cheating depends almost entirely on the instructions provided to students.
If you decide to allow AI use in assessments or learning activities, please consult our categories of permitted AI use in assessment above and the advice in the Assessment design section for approaches you can institute in your course.
How can I know if my assessment could be completed solely with AI?
Follow the steps below to review your assessment design and questions to see whether they can be answered using GenAI tools (an illustrative script for batch-testing questions follows the list):
- Use your zID and zPass to set up your Microsoft Copilot (with Commercial Data Protection) account.
- Input the assessment question and ask for an answer.
- Regenerate the answer a few times to see the variations the AI produces.
- Ask the tool to refine or expand on a previous output multiple times.
- If it is a long question, try breaking down the question into smaller sections.
- Try adding more specific instructions to the prompt regarding format, emphasis, etc.
- Ask the tool to generate a version of the question it cannot easily answer. Test that question and add your own tweaks.
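If you would like to run this kind of check across a larger bank of questions, the short Python sketch below illustrates one way to batch-test assessment questions and regenerate each answer several times to see the variation. It is illustrative only: it assumes access to an OpenAI-compatible chat-completions endpoint, and the endpoint URL, API key and model name shown are placeholders rather than UNSW-provided services. For individual questions, the manual Copilot steps above remain the recommended approach.

```python
# Illustrative sketch only: batch-test assessment questions against a
# generative AI model, regenerating each answer a few times to see variation.
# Assumes an OpenAI-compatible chat-completions endpoint; the base_url,
# api_key and model name are placeholders, not UNSW-provided services.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                        # placeholder
    base_url="https://your-approved-endpoint/v1",  # placeholder
)

assessment_questions = [
    "Explain the difference between validity and reliability in assessment.",
    "Critically evaluate the use of randomised controlled trials in education research.",
]

for question in assessment_questions:
    for attempt in range(3):  # regenerate a few times to compare variations
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": question}],
            temperature=0.8,      # a higher temperature surfaces more variation
        )
        print(f"--- Attempt {attempt + 1}: {question[:60]}")
        print(response.choices[0].message.content)
```

The same loop can be extended to trial the refinements listed above, for example by appending formatting instructions to the prompt or splitting a long question into smaller parts.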
Anyone building or training a bespoke generative AI tool as part of their work at UNSW, or working closely with those who are, can consider incorporating Retrieval-Augmented Generation (RAG) into the tool. By incorporating trusted sources into a GenAI tool’s knowledge base, RAG can help make the tool more current and better tailored for specific purposes. Specifically, RAG can help achieve the following (a minimal illustrative sketch follows the list):
- Assessment Questions: RAG can be particularly useful for assessing whether an AI-generated answer to an assessment question is sufficient to fulfil the stated learning outcomes. By cross-referencing with trusted sources, it helps verify whether the tool’s response aligns with established knowledge.
- Current Information: RAG allows a GenAI tool to pull information from authoritative sources beyond its initial training data, ensuring that the tool reflects the most recent knowledge.
- Tailored Responses: When faced with specific tasks or assessment questions, RAG enables the tool to provide more precise and context-aware answers. It tailors responses by combining generative capabilities with pre-approved information.
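For colleagues who want a concrete picture of how RAG fits together, the short Python sketch below shows the core idea: retrieve the most relevant passages from a small set of trusted course documents, then prepend them to the prompt that would be sent to a generative AI tool. The document names, the TF-IDF retrieval method and the prompt wording are hypothetical examples chosen for illustration, not a UNSW-endorsed implementation.

```python
# Minimal RAG-style sketch (illustrative only): retrieve the most relevant
# passages from a small set of trusted documents, then prepend them to the
# prompt sent to a generative AI tool. Document names and wording are
# hypothetical; a production tool would use a proper vector store and an
# approved model endpoint.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

trusted_sources = {
    "course_outline.txt": "This course assesses the design of controlled experiments ...",
    "marking_rubric.txt": "High distinction work integrates peer-reviewed literature ...",
    "lecture_notes_week3.txt": "Retrieval-augmented generation grounds output in supplied documents ...",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k trusted passages most similar to the query (TF-IDF + cosine)."""
    names = list(trusted_sources)
    corpus = [trusted_sources[n] for n in names]
    matrix = TfidfVectorizer().fit_transform(corpus + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
    top = sorted(zip(scores, names), reverse=True)[:k]
    return [trusted_sources[name] for _, name in top]

def build_augmented_prompt(assessment_question: str) -> str:
    """Combine retrieved trusted context with the assessment question."""
    context = "\n\n".join(retrieve(assessment_question))
    return (
        "Answer using only the trusted context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {assessment_question}"
    )

print(build_augmented_prompt("Does this answer meet the stated learning outcomes?"))
```

Because the retrieved context comes from sources you control, the tool's responses can be cross-checked against established course materials rather than relying solely on its training data.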
If you determine that your assessment question or design is answerable solely with GenAI, consult the Checklist for self-auditing assessments in the world of AI for ideas about how to redesign assessments for your course.
AI and assessment validity
Assessment at UNSW facilitates and evaluates student learning (UNSW Assessment Policy). A valid assessment is one that achieves these aims. As a result, assessment tasks can be more or less valid, and validity is measured against multiple factors that are affected by context (Dawson et al 2024). The availability of generative AI tools has raised concerns about the validity of certain assessment types, as AI can generate or improve on submissions in ways that make it difficult to assess a student’s learning. Consequently, it is essential to be clear in assessment instructions/briefs exactly what learning is being assessed, and the extent to which the use of AI forms part of the skills and activities the students are expected to undertake.
For some tasks, the learning being assessed may be of a student’s understanding independent of AI. For others, the assessment may be of the ability to craft submissions using AI tools to a greater or lesser extent. The validity of the individual assessment will include whether the assessment has been designed in such a way that there is confidence that the student undertakes the assessment as intended. The overall validity of the degree program will include whether there is an appropriate mix of assessment types – both with and without AI involvement.
In the current environment, most assessments were designed to be completed without AI assistance. It is crucial to examine whether those tasks:
- remain likely to be completed without the use of AI
- can be completed with use of AI without undermining their validity in evaluating student learning
- need some alteration to assure the student’s learning in an AI environment
- are no longer fit for purpose
Whether an assessment has validity in the current environment depends on the way the assessment is designed, how the environment the student completes it in is controlled, and the purpose for which the assessment has been set. There are no definitive conclusions on validity for any general assessment type. A detailed examination of each assessment is the appropriate long-term approach.
Assessment task susceptibility
What can be identified with more clarity is the degree to which a task is capable of being completed with AI. This may or may not be appropriate for student learning. Research has identified a growing range of assessment types containing tasks that are susceptible to AI completion. To give colleagues a heuristic introduction to the literature, we have categorised various assessment types as having low, moderate, or high susceptibility in the context of an education landscape influenced by generative AI.
Please explore the collapsible tables below to begin considering the susceptibility of your assessments. If your course assessments could fall into the high or moderate susceptibility categories, we recommend exploring our section above on Assessment Considerations for Course and Program Development.
High susceptibility tasks
Assessments that contain tasks with high susceptibility are easily completed with AI-generated outputs, making it difficult to ensure they measure what they intend to. These assessments often involve tasks that AI can easily perform, such as generating text or solving standard problems, thus compromising the integrity of the evaluation. Consider whether it is possible to add a less susceptible task to the assessment to increase its overall validity.
Task type | Information |
---|---|
Code submission | AI tools excel in simple coding tasks and explanations, allowing students to bypass understanding of core logic. Complex projects are more resistant to AI-generated content (Nikolic et al 2023). |
Developing research questions and hypotheses | AI-generated hypotheses tend to be general and lack the necessary specificity. Developing a research question and hypothesis requires not only knowledge and comprehension but also the higher-order ‘Analysis’ and ‘Synthesis’ domains of Bloom’s taxonomy (Wang 2023). |
Online quizzes (multiple choice questions, short answer questions) | AI tools like ChatGPT can handle multiple-choice, short answer and conceptual questions with increasing accuracy, especially in unsupervised environments. Complex questions or proctored settings are needed to maintain validity (Nikolic et al 2023). |
Written assessments (research-based) | AI tools can generate passable research papers and literature reviews, though they often fabricate references and lack critical depth, making this type of assessment vulnerable to automation (Nikolic et al 2023; Nikolic et al 2024). |
Moderate susceptibility tasks
Assessments with moderate susceptibility tasks are somewhat susceptible to AI influence but still retain a reasonable degree of accuracy in measuring student learning. These assessments may include tasks where AI can assist but not fully complete the work, requiring students to demonstrate understanding and application of concepts.
Task type | Information |
---|---|
Analysis, data interpretation and findings | AI tools can analyse and categorise information but lack depth in comparative analysis and integration with literature. They often fail to link ideas cohesively or provide thorough, multi-faceted analysis for higher-level tasks (Wang 2023; Thanh et al 2023). |
Essays and reports | AI-generated essays are comparable to human essays and reports in terms of structure and academic tone. However, AI outputs often lack depth, originality, and critical analysis, reducing validity for higher-level tasks (Revell et al 2024). |
Image-based questions | AI tools have limited ability to interpret or generate responses for image-based assessments, but this area is improving with technology (Nikolic et al 2023; Raftery 2023). |
Laboratory work and reports | AI can support structuring reports, but it cannot perform or replicate physical lab work and lacks the ability to interpret real data from lab experiments, preserving the need for human analysis and interpretation (Wang 2023). |
Numerical problem-solving | AI can perform well on basic numerical calculations, especially when enhanced with plugins like Wolfram, but struggles with complex, multi-step problems requiring specific reasoning or diagrams (Thanh et al 2023; Nikolic et al 2024). |
Project-based written assessments | AI can generate structured responses, but project-based assessments require deep contextual understanding and original contributions, limiting AI’s role (Nikolic et al 2023; Nikolic et al 2024). |
Reflective writing, personal essays and portfolios | AI struggles to replicate personal experiences or metacognitive reflections (Lye & Lim 2024; Nikolic et al 2024). AI cannot effectively compile or explain the context and personal insights for portfolios, making them less prone to automation (Lye & Lim 2024). However, a passable effort is possible with the correct input/training, and especially if the student applied enough effort to build upon the generated response. |
Low susceptibility tasks
Assessments with low susceptibility tasks are those that remain largely unaffected by generative AI tools. These assessments accurately measure student learning outcomes through methods that require critical thinking, problem-solving, and original thought, which AI cannot easily replicate.
Task type | Information |
---|---|
Close reading and critical analysis of literature | AI struggles with deep textual analysis, failing to offer nuanced interpretations or to incorporate cultural context and secondary criticism, which maintains the validity of these assessments (Thanh et al 2023; Revell et al 2024). |
Complex visual artefacts | AI finds it difficult to generate unique non-text-based content like diagrams, mind maps, or long-form videos (Nikolic et al 2023; Mulder et al 2023). |
Context-specific tasks | AI struggles with tasks requiring personal experience, real-world scenarios, or detailed contextual analysis, preserving validity in these assessments (Mulder et al 2023; Nikolic et al 2024). |
Group work | AI cannot effectively engage in group collaboration or contribute unique insights within a team, maintaining assessment validity for group work (Mulder et al 2023; Raftery 2023). |
In-class exams or handwritten assignments | AI cannot assist in real-time tasks such as in-class handwritten tasks or timed quizzes, maintaining their integrity (Mulder et al 2023; Lye & Lim 2024; Revell et al 2024). |
Nested or staged assessments | Breaking larger tasks into smaller, staged assessments with feedback maintains validity, as AI cannot easily engage in iterative learning processes (Mulder et al 2023; Raftery 2023). |
Oral presentations, debates and interviews | AI tools can assist in scriptwriting, but students must present the work themselves, making the assessment format resistant to AI-generated content (Nikolic et al 2023). Interview-based assessments enhance security (Mulder et al 2023). |
Peer review | Peer reviews require critical evaluation skills, which AI struggles with. They promote higher-order thinking, making AI-generated content less useful for peer review exercises (Mulder et al 2023). |
Process-oriented assessments | Shifting focus from final products to the learning process (e.g., process notebooks, reflection) reduces AI misuse and offers better insights into student thinking (Mulder et al 2023). |
Situational judgment scenarios | AI struggles with critical evaluation, particularly when assessments require judgment based on theoretical frameworks or contextualised knowledge (Thanh et al 2023). |
Viva voce exams and real-time Q&A | AI cannot participate in real-time verbal exchanges, keeping these assessments highly valid and secure (Lye & Lim 2024). |
AI and assessment validity: Video resources
Maintaining assessment validity: A step-by-step approach
Nexus Fellow and Lecturer Dr Dhanushi Abeygunawardena, from the Faculty of Science, explains an approach for identifying the assessment adjustments required because of AI capabilities.
Testing assessment vulnerability
A team from UNSW Engineering researches the validity of assessments by ethically ‘AI hacking’ them: generating AI submissions and submitting them alongside student submissions for blind marking.
The value of programmatic assessment amid AI disruption
Nexus Fellow and Associate Professor Priya Khanna Pathak explains the need to rethink how we assess student capabilities and competencies because of the AI disruption.
Source list: AI and assessment validity
Source | Summary |
---|---|
Dawson, P., Bearman, M., Dollinger, M., & Boud, D. (2024). Validity matters more than cheating. Assessment & Evaluation in Higher Education, 1–12. https://doi.org/10.1080/02602938.2024.2386662 | This article questions the importance allocated to cheating as a concept, arguing that the broader concept of validity is more central to assessment. The authors highlight how attempts to forestall cheating have the potential to undermine assessment validity. The central thesis is that the most important aspect of assessment is the ability to measure graduate capabilities accurately. |
Lye, C. Y., & Lim, L. (2024). Generative Artificial Intelligence in Tertiary Education: Assessment Redesign Principles and Considerations. Education Sciences, 14(6), 569. https://doi.org/10.3390/educsci14060569 | This article examines AI's benefits and challenges in assessments and proposes the Against, Avoid, Adopt (AAA) principle for redesigning assessments, arguing that policing AI will not address fundamental assessment issues. |
Mulder, R., Baik, C., & Ryan, T. (2024). Rethinking assessment in response to AI. Melbourne Centre for the Study of Higher Education. Retrieved from https://melbourne-cshe.unimelb.edu.au/__data/assets/pdf_file/0004/4712062/Assessment-Guide_Web_Final.pdf | This guide proposes redesigning assessments to reduce AI misuse without relying on high-stakes, closed-book exams. By introducing diverse, authentic, lower-weighted tasks and enhancing learning through feedback, educators can decrease cheating motivation. Strategies include reframing assessments as learning tools, diversifying assessed artefacts, and auditing uniquely human thinking processes, despite challenges like scalability and resourcing. |
Thanh, B.N., Vo, D. T. H., Nguyen Nhat, M., Pham, T. T. T., Trung, H. T., & Ha Xuan, S. (2023). Race with the machines: Assessing the capability of generative AI in solving authentic assessments. Australasian Journal of Educational Technology, 39(5). https://doi.org/10.14742/ajet.8902 | This study introduces a framework using Bloom's taxonomy to help educators assess genAI tools like ChatGPT and Bard in economics assessments. The findings urge reimagining assessments to emphasise higher-order skills and better prepare students. |
Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G. M., Grundy, S., Lyden, S., Neal, P., & Sandison, C. (2023). ChatGPT versus engineering education assessment: A multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. European Journal of Engineering Education, 48(4), 559-614. https://doi.org/10.1080/03043797.2023.2213169 | This paper examines ChatGPT's impact on engineering education assessments by analysing its responses to prompts from ten subjects across seven Australian universities. ChatGPT passed some subjects and excelled in certain assessments, highlighting strengths and weaknesses in current practices and suggesting the need to revise assessments as AI capabilities rapidly advance. |
Nikolic, S., Sandison, C., Haque, R., Daniel, S., Grundy, S., Belkina, M., Lyden, S., Hassan, G. M., & Neal, P. (2024). ChatGPT, Copilot, Gemini, SciSpace and Wolfram versus higher education assessments: An updated multi-institutional study of the academic integrity impacts of Generative Artificial Intelligence (GenAI) on assessment, teaching and learning in engineering. Australasian Journal of Engineering Education. https://doi.org/10.1080/22054952.2024.2372154 | This multi-institutional study assesses genAI tools across ten engineering subjects. Repeating the study with tools like ChatGPT-4, Copilot, Gemini, SciSpace, and Wolfram showed increased performance, intensifying academic integrity issues but also offering teaching opportunities. ChatGPT-4 was notably well-rounded, and a GenAI Assessment Security and Opportunity Matrix was introduced. |
Raftery, D. (2023). Will ChatGPT pass the online quizzes? Adapting an assessment strategy in the age of generative AI. Irish Journal of Technology Enhanced Learning, 7(1). https://doi.org/10.22554/ijtel.v7i1.114 | This article shows ChatGPT achieving high scores on twelve first-year quantitative quizzes, especially when using plugins and correcting calculation errors. The implications for assessment strategies are discussed, including ethical integration of AI into education and the growing importance of prompt engineering skills. |
Revell, T., Yeadon, W., Cahilly-Bretzin, G., Clarke, I., Manning, G., Jones, J., Mulley, C., Pascual, R. J., Bradley, N., Thomas, D., & Leneghan, F. (2024). ChatGPT versus human essayists: An exploration of the impact of artificial intelligence for authorship and academic integrity in the humanities. International Journal for Educational Integrity, 20, Article 18. https://doi.org/10.1007/s40979-024-00161-8 | This study evaluates genAI's performance on an Australian criminal law exam by comparing AI-generated answers with student responses. GenAI performed below average in detailed legal and critical analysis but outperformed students in open-ended questions and essays, highlighting its capabilities and limitations for legal education and future workforce implications. |
Wang, J. T. H. (2023). Is the laboratory report dead? AI and ChatGPT. Microbiology Australia, 44(3), 144-148. https://doi.org/10.1071/MA23042 | This article introduces five prompts for educators to evaluate AI-generated scientific writing. Testing GPT-3.5 revealed well-organised but generalised content lacking specificity and integration of peer-reviewed literature. |
How can we detect students’ improper use of GenAI?
There are two answers to this question: one relates to detection tools, and the other to human interaction with our students.
Digital Detection Tools
UNSW only authorises the use of Turnitin's AI Writing Detection Tool for detecting improper AI use. Students' work should not be uploaded to any other platform because:
- Only Turnitin has been approved by UNSW Cyber Security as protecting student privacy.
- The accuracy of other detection tools is extremely low. Even OpenAI have warned that their own detection tool sits at less than 30% accuracy. That’s an enormous error rate.
Many small companies and individuals have developed various detection tools. However, these tools' sites frequently lack clarity regarding the cookies and information they collect, their storage methods, and their data and privacy policies.
To be clear: if Turnitin identifies the potential for AI writing in a response, this is merely a flag for an academic investigation. Markers should rely on their own professional judgement, not the AI detection tool.
Human interactions
In the Conduct & Integrity Office’s experience, academics are great at picking up on the signs of cheating behaviours themselves:
- Assessment body: Generic answers, overly logical arguments, distorted truth and/or fabricated events/names/places are all tell-tale signs. Generally, a marker should watch out for a written assignment with a writing style that does not match the student's in-class written work or posts and/or expected capability for the stage of their degree.
- Reference list: Academics are also great at picking up fake references, or ones that don’t match reading lists or the assessment body text.
If you suspect cheating behaviours, have a discussion with the student – can they explain the steps they undertook to complete the assessment? Can they explain the work that they have done and what their submission means?
How can I investigate students’ improper use of AI?
Step 1: Initial checks
There are a number of approaches that can be used to detect or deter the use of GenAI to complete assessments. In the first instance, evidence of AI-generated work could justify a significant reduction in the mark for the assessment without the need to prove misconduct.
You can complete a sense-check on the submission, to see if it is in fact a false positive. This may include:
- Paying attention to references and checking whether they exist. AI-generated references are often plausible-looking but fake (a sketch of an automated existence check appears below).
- Looking for formulaic answer structures. AI writes according to the level of instruction given by the author, but in all cases will produce the most probable version of that instruction. So, it will produce something that is generic to that format.
- Looking for an artificially even-handed treatment of sources. The AI cannot weigh the strength of the sources itself so it will generally avoid preferring one over the other. This is also dependent on the way the question has been developed.
- Checking the assessment instructions to confirm the extent to which students are prohibited from using GenAI tools. If their use is permitted, it must be properly credited by the student, but the submission must be substantially the student’s own work.
As a final component of a sense-check, Turnitin has noted that some forms of formulaic writing can lead to false positives. See Turnitin's blog on understanding false positives.
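As one illustrative aid for the reference check in the list above, the short Python sketch below queries the public Crossref API with the text of a citation (not the student's submission) and prints the closest matching records for comparison with what the student cited. This is a sketch only: Crossref mainly indexes works with DOIs, so the absence of a match is a prompt for closer manual checking, not proof that a reference is fabricated.

```python
# Illustrative sketch: look up a citation's bibliographic text against the
# public Crossref API and print the closest matches. Only the citation text
# is sent, not the student's submission. No match is not proof of fabrication;
# it is simply a prompt for closer manual checking.
import requests

def check_reference(citation_text: str, rows: int = 3) -> None:
    """Print the closest Crossref matches for a citation string."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    if not items:
        print("No Crossref matches found - check manually.")
        return
    for item in items:
        title = (item.get("title") or ["(no title)"])[0]
        date_parts = item.get("issued", {}).get("date-parts") or [[None]]
        year = date_parts[0][0] if date_parts[0] else None
        print(f"{title} ({year}) DOI: {item.get('DOI', 'n/a')}")

# Example: a citation drawn from the source list earlier in this guide
check_reference("Dawson Bearman Dollinger Boud 2024 Validity matters more than cheating")
```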
Step 2: Check the signs
AI writing is generated from predictions of what the next word should be. The predictions in some instances are formulaic and correct, and in other instances unusual and incorrect. This means there is no one way of identifying AI-generated writing. There are, however, a number of signs to look for in sense-checking the Turnitin AI report:
- Very general statements or generic answers
- Rigid or formulaically logical arguments and answer structure
- No spelling mistakes, typos or other grammatical errors
- Fabricated names, dates, events, places
- Incorrect or fake referencing – references that do not match the text
- Inconsistent writing style throughout the assignment
- Inconsistent terminology, concepts, or expected capability for the stage of the degree
- Written assignment writing style that does not match the student's in-class verbal or written work
- Repeated errors
- Artificially even-handed treatment of sources (the AI cannot weigh the strength of sources itself, so it will generally avoid preferring one over another; this also depends on how the question has been developed)
These signs are not definitive proof of cheating using GenAI. It is important to review further and use academic judgment before taking any action. If you suspect that a student has cheated, it's best to speak with them directly and discuss your concerns.
Step 3: Discuss with the student
If you have a reasonable suspicion that the student has used GenAI improperly, it will be necessary to have a conversation with them about it. However, it is important to consider that improper use of AI does not necessarily represent a purposeful effort to cheat.
The teacher's guide on Conversation Starters with Students (resource below) provides several initial conversations and appropriate responses to each situation with some key takeaway points.
Seek, in as non-accusatory a way as possible, to validate potential unauthorised use by asking the student:
- for copies of drafts of their assignment
- whether the student can explain the steps they undertook to complete the assessment
- whether the student can explain orally the work they completed and what their submission means, so that they demonstrate the learning outcomes for the assignment
Step 4: Contact your SSIA for further assistance
If there is suspicion that a submission contains unauthorised AI-generated content, seek advice from your School Student Integrity Adviser (SSIA) on the process for managing or referring serious student misconduct matters.
Provide the SSIA with evidence of clear instructions provided to students that this degree of AI use in the assessment was unauthorised.
Step 5: Contact the Conduct & Integrity Office
If your concerns have not been addressed by the above steps, please refer the matter to the Conduct & Integrity Office via the Conduct & Integrity Office site (linked in myUNSW) or by emailing [email protected] with:
- a copy of the assessment, and evidence that clear instructions were provided to the student that the use of AI was unauthorised
- the reasons for the suspicions - including the results of any viva voce/oral assessment
Note: This process is similar to the approach for suspicions of contract cheating. The Conduct & Integrity Office can help navigate any suspicions or questions about these matters and provide advice on student conduct.
Where the unauthorised use of AI in an assessment is admitted or determined, a finding of serious student misconduct is made – as a breach of Principle 3 of the Student Code of Conduct which states that students must act with integrity, honesty and trust.
The penalties for a finding of this sort would be consistent with the penalties for Serious Student Misconduct and Serious Plagiarism – they would normally sit at 00FL for the course, suspension or exclusion depending on the matter.
Conversation starters with students
How can a student be penalised for improper use of GenAI?
Students must be provided with clear instructions stating whether they can use ChatGPT or other forms of GenAI for each assessment or learning activity and if so, for what purpose. This notification needs to be provided to students in writing and through multiple channels (e.g. written in assessment instructions and the course outline, communicated verbally in lectures and tutorials).
If you suspect that someone is using GenAI without proper authorisation (based on your professional judgment rather than a score from an AI detection tool), you must report it to the Conduct & Integrity Office. This will be considered a potential case of serious student misconduct, and it will be managed under the Student Misconduct Procedure.
Inclusive assessments and AI
Using AI for inclusive and engaging assessment methods
Lucy Jellema describes how teachers can use AI for designing assessments that foster inclusion of a diverse student cohort while also challenging students with different learning styles and teaching real-world skills.
Supporting inclusive assessment design with AI
Lucy Jellema explores innovative methods for leveraging AI in creating flexible and inclusive assessment rubrics. She discusses how you can enhance thoughtful assessment design by using AI to consider assessment from the students' perspective.