1 May 2024: Gen AI in assessment: institution and program tactics

A joint session with the Assessment in Higher Education Network (UK) and Transforming Assessment, featuring selected presentations previewing the Assessment in Higher Education Conference (UK) 2024.

Session chair: James Wood (Bangor University, UK)

Featuring:

1) Providing AI guidance through COMPASS: Did students take the right direction? by Philip Denton (Liverpool John Moores University, UK)

Technological advancements in generative artificial intelligence (AI) have exacerbated longstanding assessment challenges within higher education (Lodge et al., 2023). Against this fluid backdrop, some fundamental things apply as time goes by. Firstly, the definition of cheating remains unchanged: students who deviate from the agreed permissions for an assessment should expect an academic misconduct investigation. Secondly, we continue to set authentic assessments framed within ‘meaningful contexts’ (Swaffield, 2011), and these will evolve as AI becomes ubiquitous within society. Thirdly, conveying unfamiliar concepts through metaphor is known to influence thinking in profound ways; see, for example, Thibodeau and Boroditsky (2011).

It was within this climate that the LJMU Faculty of Science developed its protocol for communicating AI permissions in assessment (COMPASS) in summer 2023. Contemporaneous guidance from Monash University (2023) advised assessors to select one of four distinct AI conditions; these were assigned to the four cardinal points within COMPASS: N, S, E and W. The most restrictive and permissive directions are N (No AI tools) and E (Every AI tool may be used), respectively. The other two directions provide opportunities for nuance: S (Some AI tools) and W (Ways of using AI tools), each with associated caveats. The Monash AI acknowledgement template, which students use to disclose their AI usage, was adopted within COMPASS. The option of a null declaration was added, enabling students to explicitly confirm that they did not use AI.

This presentation will report on the outcomes of a Spring 2024 survey of staff and students who were invited to share their perceptions of COMPASS. Ahead of these results, we conclude that the appropriation of a compass as a metaphor was prescient: as we rethink assessment in response to AI, a shift is expected from assessing written products in isolation towards an increased emphasis on the process of learning (Lodge et al., 2023). As assessment becomes more of a journey than a destination, a clear sense of direction becomes more important. Ultimately, however, it is anticipated that our four-point COMPASS guidance will become redundant. It is noteworthy, for example, that the latest iteration of the guidance from Monash University (2024) reduces its AI conditions from four to three. We suggest that the sector will eventually settle on two AI conditions: complete prohibition, with associated invigilation, or the unfettered use of AI tools that reflects the reality of life in an AI era.

2) A course-wide approach to developing staff and students’ generative AI critical literacy by Susie Macfarlane, Megan Dennis and Emily Tomlinson (Deakin University, Australia)

Generative artificial intelligence (genAI) has developed to the point where learners who use it may not be learning, or achieving the learning outcomes and professional competencies they require to graduate and practise. However, research indicates that banning or detecting genAI is not effective, and that universities should instead nurture staff and students’ AI literacies and develop curricula that prepare graduates for future professional contexts (Liang et al., 2023). The current initiative was part of a strategic project and community of practice implemented in 2023 in the health faculty of a large Australian university, designed to enable academics and course teams to discuss and critically engage with genAI, adapt assessment practices in response to genAI, and provide students with the opportunity to use genAI transparently and responsibly (Lodge et al., 2023). The challenge genAI poses to academic integrity is of particular concern to nursing educators, who are responsible for ensuring their graduates meet professional nursing standards. An empowering and educative approach recognises the growing role of AI in healthcare and scaffolds students’ development of critical AI literacies in preparation for nursing practice (Buchanan et al., 2021; Castonguay et al., 2023).

This project therefore aimed to 1) develop nursing students’ capability to use genAI critically, ethically and responsibly across the programme, while 2) ensuring that students learn and evidence course learning outcomes, by 3) engaging and empowering teaching teams to critically examine genAI and address students’ AI literacies in assessment and across the curriculum. Recognising the complexity of the challenge posed by genAI, the project team adopted a collaborative, capacity-building method, co-designing a course-wide approach and curriculum framework: building course team genAI literacies, facilitating course team discussions about concerns and implications for assessment, and co-designing strategies to embed genAI in the curriculum and assessment. This approach strategically develops students’ critical AI literacies at appropriate stages and with sufficient scaffolding (MacCallum et al., 2023). Additionally, a course-wide approach to AI responds to calls for systemic and programmatic assessment strategies that provide assurance that students are demonstrating the required learning outcomes in the context of genAI (Lodge et al., 2023).

Project outcomes include a set of principles for the critical and responsible use of genAI in higher education, a manual of resources and strategies guiding the course-wide implementation of genAI, and genAI-specific modules and assessments. The pilot is currently being extended to four additional health courses. This presentation will discuss project findings, share resources, and invite attendees to share their experiences and initiatives.

Further resources:

Session Recording