Introduction
For most students, assessment defines the curriculum (Ramsden, 2003) and has a major influence on their overall learning experience. Traditional assessment paradigms in higher education have faced criticism for their unfavourable impact on the student experience, as well as for causing excessive workloads for both students and staff. The rise of generative AI, which challenges the integrity of current assessment practice, and the need for meaningful, work-relevant educational experiences necessitate a shift towards a more holistic and integrated system of assessment. UNSW is committed to enhancing the student experience through exploring innovative assessment approaches, and Programmatic Assessment for Learning (PAL) is a key part of this endeavour.
What is PAL?
Unlike traditional high-stakes exams, PAL focuses on continuous feedback and low-stakes assessments that contribute to high-stakes decisions when aggregated (van der Vleuten et al., 2014). It systematically combines data from various assessment tasks over time to build a comprehensive view of a student’s progress towards achieving program-level outcomes.
Emerging evidence shows that programmatic assessment offers several long-term benefits, including increased competency growth, improved receptivity towards feedback, improved validation of capabilities, and reduced assessment overload when assessment tasks are carefully chosen and triangulated (Baartman et al., 2022; Khanna et al., 2023; Roberts et al., 2022; Samuel et al., 2023). However, misalignment between learning (which is longitudinal and self-directed) and assessment (which is inherently judgemental) can lead to adverse student experiences and diminished transparency, trust and acceptability. In response, PAL aims to equip students with the ability to analyse their learning strengths and gaps, using collated assessment data to set specific learning goals and strategies (Schuwirth et al., 2017).
Core principles of PAL
Based on extensive consultations with the Programmatic Assessment Working Group at UNSW in 2024, the university’s version of PAL is guided by six core principles, summarised by the acronym LEARNS:
L – Learning and learners first
E – Explicit and meaningful linkages and messaging
A – Aggregation of data
R – Reduction in workload
N – No high-stakes decisions based on a single task
S – Self-regulation of learning
These principles complement, rather than replace, the time-honoured criteria for good assessment: (1) validity or coherence, (2) reproducibility or consistency, (3) equivalence and inclusivity, (4) feasibility, (5) educational impact, and (6) acceptability (Norcini et al., 2010).
PA of Learning versus PA for Learning
Evidence suggests that programmatic assessment has often been perceived as a process of collecting and collating assessment data for decision-making and curriculum quality assurance (van der Vleuten et al., 2014). It involves the systematic collection of data across various domains of competence to tailor instruction for learners’ remediation or acceleration (Li et al., 2017). This is PA of learning. However, UNSW’s approach is PA for learning, which prioritises students’ learning trajectories in key capabilities over the mere collection of assessment data. Effective PA optimises learning, aids decision-making on learner progression, and informs quality improvement activities within a program (Iobst & Holmboe, 2020).
How does PAL work?
In conventional systems, different types of data (e.g., knowledge, skills, professionalism) are often combined or averaged together, producing a single score or grade. This makes it difficult to see which strengths or gaps in capabilities the score reflects. PAL, in contrast, aggregates data by specific competencies or learning outcomes, ensuring that each type of data is used appropriately to make holistic decisions about student progress (van der Vleuten et al., 2014).

For example, in the left panel of the figure below, data regarding learning outcomes related to knowledge, skills, communication and professionalism (represented by oranges, pineapples, bananas and apples respectively) are aggregated within each module or unit of study. A student may progress from module to module with unsatisfactory achievement in one or more of the learning outcomes, because progression is determined by aggregating data related to all learning outcomes in a “fruit salad”. In contrast, the right panel aggregates assessment data for each learning outcome over time, enabling a clear view of students’ achievement of the required standards for each outcome. Progression or graduation decisions are then made based on satisfactory attainment of all outcomes.
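To make the contrast concrete, here is a minimal sketch in Python. All names and numbers are hypothetical (illustrative modules, outcomes and scores, and an assumed pass mark of 50); it simply shows how per-module averaging can mask a persistent gap that per-outcome aggregation over time surfaces:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical assessment records: (module, learning outcome, score out of 100).
# Module names, outcomes, scores and the pass threshold are illustrative only.
records = [
    ("Module 1", "knowledge", 85), ("Module 1", "skills", 40),
    ("Module 1", "communication", 75), ("Module 2", "knowledge", 80),
    ("Module 2", "skills", 45), ("Module 2", "communication", 70),
]
PASS = 50

# Conventional "fruit salad": average everything within each module.
# A weak outcome (skills) is masked by strong ones.
by_module = defaultdict(list)
for module, _, score in records:
    by_module[module].append(score)
for module, scores in sorted(by_module.items()):
    print(module, "pass" if mean(scores) >= PASS else "fail")  # both modules pass

# PAL-style view: aggregate each outcome longitudinally across modules,
# making the persistent gap in "skills" visible and actionable.
by_outcome = defaultdict(list)
for _, outcome, score in records:
    by_outcome[outcome].append(score)
for outcome, scores in sorted(by_outcome.items()):
    print(outcome, "met" if mean(scores) >= PASS else "gap")  # skills -> gap
```

In the module view, both modules “pass” despite consistently weak skills scores; in the outcome view, the skills gap is explicit and can be addressed before any high-stakes decision is made.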
Benefits of PAL
- Gives a holistic view of student progress: By tracking learners’ progress across multiple competencies, PAL creates a detailed picture of their development, enhancing both learning and decision-making (Roberts et al., 2022; van der Vleuten et al., 2014).
- Encourages active engagement: Continuous feedback motivates students to engage actively with their learning, reducing the stress associated with high-stakes exams (Lodge et al., 2023).
- Reduces workload: PAL aims to streamline assessment tasks, potentially reducing the workload for both students and staff (Schut et al., 2021).
- Encourages academic integrity: With low-stakes assessments, the pressure to cheat is minimised, fostering a more honest learning environment (Lodge et al., 2023).
- Enhances feedback: Aggregated assessment data provides meaningful, actionable feedback for students.
- Supports self-regulation: PAL encourages students to set and monitor their learning goals, fostering self-regulation.
PAL implementation stages (with UNSW examples)
The journey towards implementing PAL at UNSW involves several stages, each tailored to the specific needs of different programs and faculties.
- Light-touch stage: Programs incorporate some principles of PAL but remain largely traditional. Assessment tasks are aligned with program-level learning outcomes, and there is some longitudinal tracking of student progress.
  Example: Bachelor of Vision Science and Master of Clinical Optometry – This program incorporates PAL principles in its final two clinically immersive years, focusing on core clinical competencies.
- Intermediate stage: This stage includes all components of the light-touch stage, plus additional features such as dashboards to track student progress and capstone assessments to evaluate longitudinal development.
  Example: Bachelor of Science – This program uses a dashboard to track progress in six program learning outcomes, encouraging students to create learning plans and set goals.
- Comprehensive stage: At this stage, the entire program is centred around well-defined learning outcomes. Assessment tasks are varied and carefully chosen to sample students’ attainment of expected standards. High-stakes decisions are based on performance across all program learning outcomes.
  Example: Bachelor of Commerce – With a longitudinally designed curriculum, this program uses an interactive platform to guide students’ learning around core program learning outcomes and help them develop tailored career plans.
Challenges and enablers
Implementing PAL requires a cultural shift and significant investment in resources and technology. Key enablers include revisiting program learning outcomes, building capacity in assessment and feedback literacy, and ensuring agility in assessment policies and procedures (Baartman et al., 2022; Heeneman et al., 2021).
The shift towards PAL at UNSW represents a significant step in enhancing student learning and preparedness for the professional world. By focusing on continuous feedback and holistic assessment, PAL aims to create a more meaningful and work-relevant educational experience.
Acknowledgements
We extend our gratitude to the UNSW Programmatic Assessment Working Group (PAW) for their dedication and contributions to this initiative.
Resources
Read related blog article: Comparing apples and oranges: transforming UNSW assessment with Programmatic Assessment for Learning
References
Baartman, L., van Schilt-Mol, T., & van der Vleuten, C. (2022). Programmatic assessment design choices in nine programs in higher education. Frontiers in Education, 7, 931980. https://doi.org/10.3389/feduc.2022.931980
Heeneman, S., de Jong, L. H., Dawson, L. J., Wilkinson, T. J., Ryan, A., Tait, G. R., … van der Vleuten, C. P. M. (2021). Ottawa 2020 consensus statement for programmatic assessment – 1. Agreement on the principles. Medical Teacher, 43(10), 1139–1148. https://doi.org/10.1080/0142159X.2021.1957088
Iobst, W., & Holmboe, E. (2020). Programmatic assessment: The secret sauce of effective CBME implementation. Journal of Graduate Medical Education, 12(4), 518–521. https://doi.org/10.4300/jgme-d-20-00702.1
Khanna, P., Roberts, C., Burgess, A., Lane, S., & Bleasel, J. (2023). Unpacking the impacts of a programmatic approach to assessment system in a medical programme using critical realist perspectives. Journal of Critical Realism, 22(5), 840–858. https://doi.org/10.1080/14767430.2023.2279805
Li, S., Sherbino, J., & Chan, T. (2017). McMaster modular assessment program (McMAP) through the years: Residents’ experience with an evolving feedback culture over a 3‐year period. AEM Education and Training, 1(1), 5–14. https://doi.org/10.1002/aet2.10009
Lodge, J. M., Howard, S., Bearman, M., Dawson, P., & Associates. (2023). Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency. https://www.teqsa.gov.au/sites/default/files/2023-09/assessment-reform-…
Ramsden, P. (2003). Learning to teach in higher education. Routledge. https://doi.org/10.4324/9780203507711
Roberts, C., Khanna, P., Bleasel, J., Lane, S., Burgess, A., Charles, K., & Rutzou, T. (2022). Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. Medical Education, 56(9), 901–914. https://doi.org/10.1111/medu.14807
Samuel, A., King, B., Cervero, R. M., Durning, S. J., & Melton, J. (2023). Evaluating a competency-based blended health professions education program: A programmatic approach. Military Medicine, 188(Suppl. 2), 69–74.
Schut, S., Maggio, L. A., Heeneman, S., van Tartwijk, J., van der Vleuten, C., & Driessen, E. (2021). Where the rubber meets the road—An integrative review of programmatic assessment in health care professions education. Perspectives on Medical Education, 10(1), 6–13. https://doi.org/10.1007/s40037-020-00625-w
Schuwirth, L., Valentine, N., & Dilena, P. (2017). An application of programmatic assessment for learning (PAL) system for general practice training. GMS Journal for Medical Education, 34(5). https://doi.org/10.3205/zma001149
van der Vleuten, C., Schuwirth, L., Driessen, E., Govaerts, M., & Heeneman, S. (2014). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388