4 September 2024: Gen AI in assessment: authenticity and feedback
A joint session with Assessment in Higher Education Network (UK) and Transforming Assessment featuring selected presentations from the Assessment in Higher Education Conference (UK) 2024.
Session chair: Fabio Arico (University of East Anglia, UK)
Featured:
1) Encouraging self-feedback practices and engagement with feedback at the program level using e-portfolio by Mathilde Roger (University of Durham, UK).
Despite instructors’ efforts to improve feedback practice in higher education, the National Student Survey shows low satisfaction and engagement in this area across the UK. Students often ignore feedback or fail to apply it to their future assessments.
Moving away from “assessment of learning”, Durham University is now fully embracing the “assessment for learning” (AfL) framework. In AfL, feedback practice has become a powerful tool to enhance student learning, as formal and informal feedback opportunities allow students to develop as self-assessors and build self-regulated learning (Sambell et al., 2013). Although feedback delivery can still be improved, AfL can only succeed if students respond to feedback (Winstone & Boud, 2019). This means feedback should be used to support learning as a whole, not just a single task.
This project seeks to support students in becoming active receivers of feedback by using a digital portfolio, called a Student Feedback Journal, to record and reflect on the feedback they receive during their degree. With the use of various digital platforms for assessment, feedback has become compartmentalised within a specific assessment or module. Using an e-portfolio will enable students to collate feedback from multiple assignments in one place and track their progress more efficiently.
This e-portfolio will also be used to encourage reflection on feedback opportunities which are not instructor-led, such as peer feedback and self-feedback, to promote feedback generation by comparison. Inspired by David Nicol’s work, we will explicitly encourage students to compare their work with AI-generated exemplars (Nicol, 2021). This will both enhance students’ internal feedback capabilities and highlight the limitations of AI-generated answers.
The use of e-portfolios to record feedback has already been successfully introduced in the UK (Winstone & Nash, 2017), and studies have shown that by reflecting on their feedback during the first year of undergraduate study, students could improve their feedback literacy (Coppens et al., 2023). However, the impact on student learning and NSS scores remains to be investigated. This project will also help us begin to assess the impact of generative AI in higher education and how it can be used to enhance students’ learning and promote critical thinking.
In this presentation, we will demonstrate our journal and share the initial feedback from a pilot group of student users. We welcome feedback from other practitioners interested in this topic as we prepare for a full rollout in the 2024 academic year.
2) Reinvesting in Authentic Assessments: A Response to Generative AI by Kathleen Burrows (University of Bristol, UK)
In response to the challenges posed by academic misconduct, particularly through essay mills and the subsequent emergence of generative AI, the Quality Assurance Agency for Higher Education (QAA) has consistently advocated for authentic assessment as a strategy to reengage students in the learning process (QAA, 2023). This shift in focus from misconduct to engagement seems essential. It recentres humans and their relationships to real-world tasks and allows assessment to become a ‘true test of intellectual ability’ rather than a task that can be easily outsourced (Wiggins, 1989, p. 703). Authenticity can be established through different elements, like collaboration or fidelity (Ashford-Rowe et al., 2014; Villarroel et al. 2018). However, these individual elements are not a panacea for academic misconduct (Ellis et al., 2020). Engagement is crucial (Jessop, 2023).
This paper briefly presents the trends in AI misuse in the 22/23 academic year in an International Foundations Programme and describes the assessment changes made in light of generative AI. Notable changes include more emphasis on process writing, more authentic audiences for assignments, and more integration across units. The preliminary evaluation of two changes will be presented. The first change integrated an English for Academic Purposes (EAP) writing assessment task with students’ work in other units, thereby creating a more authentic audience and purpose. The second change tied the EAP problem-solution writing task to local data and gave students the opportunity to explore the local city while researching and writing the assessment. This presentation will detail students’ engagement with these new elements and discuss their possible effectiveness. These changes call on teachers and students to engage more with each other and their community. The early findings suggest that engagement with these elements varied, but they might deter misconduct and increase motivation. This reinvestment in authenticity and human relationships provides one possible way forward for education in the age of AI.
Further resources:
- Slide Set: TA_webinar_4_sep_2024_slides.pdf (PDF)
- Chat Log (edited for typos) including shared links: TA_webinar_4_sep_2024_chat_log.txt
- University profile: Mathilde Roger profile at Durham; presentation-related link: e-portfolio walkthrough
- University profile: Katie (Kathleen) Burrows profile at Bristol
Session Recording
- View: Session recording video on Blackboard Collaborate (slide stream, video, audio and chat).
- View: Session recording video on YouTube (slide stream and audio).