Work-integrated learning promotes learning through authentic engagement in a natural workplace setting. Students develop their knowledge and skills through lived experience in a professional, discipline-specific context of practice (Fry, Ketteridge & Marshall, 2005).
The term "work-integrated learning" (WIL) refers to "a range of approaches and strategies that integrate theory with the practice of work within a purposefully designed curriculum" (Patrick et al., 2009). Particular disciplines and professional settings have their own traditional names for WIL. Terms in common use include:
- field work
- clinical placement
- professional practicum
- cooperative education
- service learning
- capstone course
- industry project.
Assessing WIL consolidates its learning benefits in any program, not just those in which it is core to meeting professional accreditation requirements.
[Video: How Field Based Learning works - A/Prof Elizabeth Fernandez]
When to use
Work-integrated learning is best used when:
- students require placement in authentic professional contexts as learning environments
- you want students to integrate theoretical knowledge and practice, to connect university or disciplinary learning with workplace application
- you want to ensure alignment between learning objectives, workplace activities and assessment
- there is a need to administer aspects of the curriculum that indirectly support a student's learning in the workplace
- you aim to produce work-ready graduates whose skills are a good match for what employers need in the sector.
WIL gives students the opportunity to:
- gain extended experience in developing 'soft' graduate capabilities such as communication, teamwork, leadership and career development
- develop awareness and understanding of workplace culture and the differing expectations of them
- obtain practical assistance and skills in developing career attributes
- put their acquired knowledge into practice.
WIL gives teaching staff the opportunity to:
- see students putting the knowledge they are acquiring in class into practice
- integrate student learning experiences into their curriculum and course development
- network and build links with a wider range of employers
- develop innovative and more applicable WIL opportunities by collaborating with past employers of placement students
- collaborate with employers who have experience in assessing graduate employability.
Assessment using a WIL curriculum raises many challenges (see Yorke, 2011), including the following:
- WIL has been characterised as comprising inherently variable, unpredictable, sometimes brief, high-risk learning events that are not replicable (Cooper, Orrell & Bowden, 2010). Because of this complexity, WIL assessment characteristically depends more on overall judgment than on fine-grained measurement.
- Because academia and the workplace have distinctive cultures, WIL may create tensions that are difficult to resolve in assessment. For example, workplace supervisors may not see the value of criterion-based grading of students, preferring a simple pass/fail assessment, particularly if there is only one student in the workplace setting.
- Workplace supervisors may struggle with their dual role as learning enablers, guiding and mentoring students, and as assessors and guardians of professional standards.
- Supervisors may be inexperienced in undertaking formalised assessment.
- Assessment may be guided by unstated assumptions and personal dispositions, unrelated to the intended learning outcomes of the course or program, and inaccessible to scrutiny by others.
- Interpretations of student performance may hinge on the initial establishment of relationships between the student and the people in the workplace setting, including the assessor.
- The authentic setting of WIL often makes it difficult or undesirable to record observations of students' performance of specific activities. This can make grade appeals hard to adjudicate.
- It can be challenging to provide equivalent assessments for students in special circumstances, such as those with disabilities.
If you include WIL activities in a course or program, explicitly recognise them in your assessment plan. Focus your WIL assessment on the distinctive learning that WIL promotes. Typically assessment will centre on observations of the student's on-the-job work performance, records or outputs of work they have undertaken, or reflective analysis by them on the experience of their learning in the work setting.
WIL experiences can be extensive or relatively constrained. They can be constituted as a program strand or whole course of study, as a module within a course or as a discrete project or task. Whatever the scale of a WIL activity, consider the following factors when planning:
- Do the intended learning outcomes provide the framework for the assessment plan? Take care within assessment to focus on learning that integrates knowledge and skill from multiple areas of learning with practice in the authentic WIL setting. Learning outcomes commonly stated for WIL include:
- extended knowledge and understanding of discipline-specific concepts and theories
- discipline-specific technical and practical skills and competencies
- professional skills within an area of professional practice
- the capacity to apply theory to practice in solving authentic problems in real-world contexts of action
- generic skills and capabilities such as leadership
- personal attributes such as resilience.
- When will assessment occur? You might decide to assess:
- prior to the WIL experience, to check that students are well prepared in terms of safety and duty of care
- during the WIL experience, to provide formative feedback on progress and to enable students to adjust their behaviours and strategies, and to assess students' performance in situ
- after the WIL experience, to encourage their retrospective reflections and self-evaluation, and to grade student performance.
- Who should be involved as assessors?
- Students assessing themselves?
- Students assessing their peers?
- Workplace mentors or supervisors?
- External assessors from industry for accreditation?
- University staff?
If you involve multiple people in the grading process, determine what weight you will give each input. Be very clear about external assessors' roles and responsibilities, both to them and to students (Hays & Bashford, 2009). Regardless of who is involved, the university coordinator of the WIL component is ultimately responsible for summative assessment; that person should ensure that appropriate assessment moderation processes are in place.
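The weighting of multiple assessors' inputs is simple arithmetic, but agreeing on it explicitly before the placement avoids disputes later. The sketch below is purely illustrative: the assessor roles, marks and weightings are hypothetical examples, not recommendations for any particular program.

```python
# Hypothetical sketch: combining marks from multiple assessors into one
# summative grade using pre-agreed weightings. Roles and weights are
# illustrative only; a real program would set these in its assessment plan.

def combine_marks(marks: dict, weights: dict) -> float:
    """Return the weighted average of marks (each out of 100).

    marks   -- mark awarded by each assessor, keyed by role
    weights -- agreed weighting for each role; must cover the same roles
    """
    if set(marks) != set(weights):
        raise ValueError("every assessor role needs both a mark and a weight")
    total_weight = sum(weights.values())
    return sum(marks[role] * weights[role] for role in marks) / total_weight

# Example: the university coordinator retains the largest share of the grade,
# with smaller inputs from the host supervisor and the student's self-assessment.
final = combine_marks(
    marks={"coordinator": 72, "host_supervisor": 80, "self_assessment": 65},
    weights={"coordinator": 0.6, "host_supervisor": 0.3, "self_assessment": 0.1},
)
```

Normalising by the total weight means the weights need not sum exactly to 1, which is convenient when a component (for example, a formative-only input) is dropped from the grade.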
- How will you balance formative and summative assessment? If it's not appropriate for external people to undertake assessment towards the summative grade, explore alternative ways of valuing judgments about the quality of students' performance and providing feedback.
For example, student groups may collaborate on a project within a WIL setting and have their outputs assessed formally for a grade by the course convenor and host supervisor. The group outputs may also be judged by a panel of external experts, prizes awarded for outstanding achievements and a showcase event held where students present their work.
- Where the assessment plan includes observation of students' performance:
- ensure that you pass on to students any formative feedback on ad-hoc observations of their performance before you conduct a summative assessment
- incorporate unpredictable circumstances, so that students can be assessed on their spontaneous performance, particularly in the later years of a program.
- Assessment criteria should:
- be aligned with learning outcomes
- be sufficiently flexible to account for the variability of the authentic setting in which the student's work is demonstrated
- underpin the development and application of assessment rubrics and the provision of feedback to students
- incorporate assessing the student's interactions with others in the WIL setting.
- Provide formal in-class learning opportunities before students begin working in the authentic WIL setting, to minimise risks and maximise students' capacity to benefit from the experience and to present their learning achievements effectively.
- Determine the appropriate range and mixture of assessment tasks and their relative weightings in the grade. Given the diversity of potential learning outcomes from WIL, assessment tasks can vary greatly, as can the objects of assessment. Examples include:
- observations by the student
- on-the-job performance tasks
- industry-based projects
- engagement in simulated WIL activities
- certification exams
- case studies (before, during, or after the placement)
- critical incident analysis
- reflective journals
- program comprehensive exams
- Objective Structured Clinical Examinations (OSCE).
The vignettes recorded by the Australian Collaborative Education Network (Patrick et al., 2009) show the diversity of assessment plans in use for WIL initiatives across many disciplines.
- Ensure that any potential legal or ethical issues associated with assessing WIL activities are identified early (preferably before students are involved), and that responses to these are clarified and communicated. For example:
- If group work with participants in the authentic context is involved in the production and presentation of assessment outputs, ensure that it is possible to differentiate each student's individual contribution.
- Check whether there are likely to be any privacy, copyright or commercial-in-confidence restrictions on material generated or collected in the workplace.
- Ensure that students and host organisations understand the implications of non-disclosure of information that may affect the processes or outcomes of the WIL activities, and in turn the assessment of those activities.
Assess work-integrated learning
Given the diversity of possible assessment types and tasks, it can be useful to think of the selection choices in terms of the "object" or purpose of assessment. Assessment usually centres on one or more of three objects:
- observations of the student's on-the-job performance
- observations of the student's performance in simulations of real situations
- reflections by the student on their WIL experience.
The following outputs can be useful in addressing these three aspects of WIL assessment.
Video recordings of performance
Performance-based assessment can be very demanding for all concerned. It can create a high level of anxiety for students and requires excellent preparation by experienced assessors if it is to take place in real time, in situ, with real participants.
Video recordings have been found to be useful in the professional practicum for teacher education programs (Wu & Kao, 2008). When students can see themselves in action, their self-assessments are better informed and the evidence for their assessors' feedback is clear. Video recordings can also generate meaningful conversation among students as peer assessors.
Limit the use of video recording to capturing critical performance moments in context, and ensure that you manage the process ethically: this includes obtaining consent from all participants.
Student as tour guide
Coe and Smyth (2010) describe a geography program in which, for purposes of assessment:
- The student acted as a tour guide (thus, the expert) to a particular area under study.
- After the tour, the student self-assessed their performance, including how effectively they managed the group of participants.
- The tour participants also graded the guide and their own experience of the activity.
Students can find such an assessment activity quite confronting, as it occurs outside the safety and privacy of the usual assessment environment and requires them to demonstrate their holistic understanding of the authentic context, as well as how well they lead, manage and communicate with diverse audiences. To prepare and support students, the program gave them opportunities for skills development and practice before they were observed as tour guides.
Assessor ratings of interactions with clients
In many disciplines, performance-based assessment centres on interactions between the student and an authentic client within the WIL setting. For such assessments, carefully articulate the assessment criteria and use analytical tools or rubrics to help the observer record their interpretations and ratings. The mini-CEX (mini-clinical evaluation exercise), for example, is a tool widely used in medicine for rating directly observed resident–patient encounters across multiple observations (Cook et al., 2010).
Client feedback on interaction with student
Introduce an additional dimension to performance assessment by gathering feedback from the client about their experience of the interaction. For example, patient feedback surveys or interviews about their interactions with medical students (as reported by Dogra et al., 2009 and Lyons et al., 2009) can be used in a range of professional WIL settings.
Host supervisor reports
At the conclusion of the practicum placement, you can ask supervisors to complete a competency-based assessment report, which can contribute to the student's grade. However, host supervisors may not always feel that assessing students is their role, so take care to minimise the workload such reports place on them.
Assessment of simulated performance
Where you cannot undertake performance assessments in situ and with real people, you might be able to plan simulated assessments. This gives you more control over the situation, reduces risk and increases fairness because the conditions are common for all students—but you do sacrifice some authenticity of assessment.
Simulations aim to mirror authentic contexts through, for example, the use of non-human models or patients, computer-based simulations or virtual worlds such as Second Life—or they can be one step removed from authentic settings, for example using case studies based on authentic data.
Students analyse their interactions with clients
As an extension of the original performance-based assessment, you can require students to analyse audio or video recordings of their previous interactions with clients. Medina (2010) describes a process where the student prepares verbatim transcripts of the encounter, then analyses them, firstly cognitively, then affectively, with the student reflecting on their own feelings and their perceptions of the client's feelings. This self-assessment addresses whether the objectives of the encounter were met, unmet, modified or changed.
Then the supervisor comments on both the performance and the self-assessment.
This type of assessment task helps students reflect on their communication skills, professionalism and ability to manage a variety of situations, including engagement with patients, clients or customers from different cultural backgrounds.
Critical incident analysis
Following the WIL experience, students analyse critical incidents that occurred, either as a written assignment or in an interview with the assessor.
You can use case studies relating to student placements to enable students to review their placement experience with the benefit of hindsight and from the less stressful and more objective position of being removed from the work environment (for example, Dacre et al., 2006, and Ramaekers et al., 2010).
Project outputs
Use the outputs from projects undertaken in the WIL setting for assessment. Students can present them in a wide range of ways, including, for example, posters (McNamara et al., 2009) and presentations at trade fairs organised with industry partners.
Diaries, journals and portfolios
Students commonly compile reflective diaries and journals as part of WIL. Whatever the field or WIL context, make it clear to students that they should make connections between their own experiences and the theoretical perspectives and research findings published in the literature, as Dummer et al. (2008, 472) advocate in the context of geography field diaries.
Reflective diaries and journals often capture evidence of learning that would be inaccessible by other means.
Portfolios, whether paper-based or electronic, allow students to collect and present a coherent account of their learning through WIL experiences, and can effectively stimulate integrative learning, autonomy and self-management.
Students can also collect learning evidence in online journals and blogs (see Larkin & Beatson, 2010). You can require students to make regular journal entries or blog posts during the WIL experience, to help keep them on track in documenting what they have observed and learnt each day.
- Australian Collaborative Education Network (ACEN) website
- Griffith University (2012), Clinical Education Resource Manual: School of Physiotherapy and Exercise Science
- UNSW Faculty of Medicine (2011), Exercise Physiology Program, Information Manual for Supervisors of Clinical Practicum
- Winchester-Seeto, T., Mackaway, J., Harvey, M. and Coulson, D. (2010), Assessment in learning through participation—Learning and Teaching Centre, Macquarie University.
- World Association for Cooperative Education website
Adams, S.K. and Wolf, K. (2008). Strengthening the preparation of early childhood teacher candidates through performance-based assessments. Journal of Early Childhood Teacher Education 29, 6–29.
Brown, N. (2008). Assessment in the Professional Experience Context. Journal of University Teaching and Learning Practice 5(1).
Coe, N.M. and Smyth, F.M. (2010). Students as tour guides: innovation in fieldwork assessment. Journal of Geography in Higher Education 34(1), 125–139.
Cook, D.A., Beckman, T.J., Mandrekar, J.N. and Pankratz, V.S. (2010). Internal structure of mini-CEX scores for internal medicine residents: factor analysis and generalizability. Advances in Health Science Education 15, 633–645.
Cooper, L., Orrell, J. and Bowden, M. (2010). Work Integrated Learning: A guide to effective practice. London: Routledge.
Dacre, J., Gaffan, J., Dunkley, L. and Sturrock, J. (2006). A new finals clinical examination. Clinical Teacher 3(1), 29–33.
Dogra, N., Reitmanova, S. and Carter-Pokras, O. (2009). Twelve tips for teaching diversity and embedding it in the medical curriculum. Medical Teacher 31, 990–993.
Dummer, T.J., Cook, I.G., Parker, S.L., Barrett, G.A. and Hull, A.P. (2008). Promoting and assessing "deep learning" in Geography fieldwork: an evaluation of reflective field diaries. Journal of Geography in Higher Education 32(3), 459–479.
Fry, H., Ketteridge, S. and Marshall, S. (2005). A handbook for teaching and learning in higher education: Enhancing academic practice (2nd edn). London: Routledge.
Hays, R. and Bashford, L. (2009). Being an external examiner: what you need to know and do. The Clinical Teacher 6, 160–163.
Larkin, I. and Beatson, A. (2010). Developing reflective practitioners online: the business of blogs in work integrated learning.
Lee, G.C. and Wu, C.-C. (2006). Enhancing the teaching experience of pre-service teachers through the use of videos in web-based computer-mediated communication (CMC). Innovations in Education and Teaching International 43(4), 369.
Lyons, O., Willcock, H. and Rees, J. (2009). Patient feedback for medical students. The Clinical Teacher 6, 254–258.
McNamara, J., Larkin, I.K. and Beatson, A.T. (2009). Poster presentations: authentic assessment of work integrated learning. In Australian Technology Network Assessment Conference 2009, RMIT University, Melbourne.
Medina, C.K. (2010). The Need and Use of Process Recording in Policy Practice: A Learning and Assessment Tool for Macro Practice. Journal of Teaching in Social Work 30(1), 29–45.
Patrick, C., Peach, D., Pocknee, C., Webb, F., Fletcher, M. and Pretto, G. (2009). The WIL (Work Integrated Learning) Report: A national scoping study. Australian Learning and Teaching Council.
Ramaekers, S., Kremer, W., Pilot, A., van Beukelen, P. and van Keulen, H. (2010). Assessment of competence in clinical reasoning and decision-making under uncertainty: the script concordance test method. Assessment and Evaluation in Higher Education 35(6), 661–673.
Wu, C.-C. and Kao, H.-C. (2008). Streaming Videos in Peer Assessment to Support Training Pre-service Teachers. Educational Technology and Society 11(1), 45–55.
Yorke, M. (2011). Work-engaged learning: towards a paradigm shift in assessment. Quality in Higher Education 17(1), 117–130.
The contributions of staff who engaged with the preparation of this topic are gratefully acknowledged, in particular Sally Mildon and Fiona Naumann from the Exercise Physiology program in the School of Medical Sciences.