Review your assessments

You are advised to review your assessments for their vulnerability to generative AI. One approach is to use generative AI tools themselves to explore the outputs they produce and how closely these mimic a student's performance. To do this:

  • Paste the assignment question into the generative AI tool. Read and evaluate the quality of the output it produces.  
  • Take it a step further: prompt the tool to refine its output, for example asking for a more critical response, or for a more academic or reflective tone, depending on the task assigned to students.  
  • Then have a go at adding further information, such as the background or context to the assignment, the marking criteria, or course notes, to see whether this changes the output.  

An alternative, and equally useful, approach is to use a risk checklist to assess how vulnerable your chosen assessment type is to students using generative AI. Grove 2024 (PDF, 970kB) outlines the AI risks of a number of different assessment types (Table 1 in Grove's paper). Given these risks, you may be tempted to retreat to closed-book exams as a low-risk option, but these are not always effective for assessing a wide range of skills and knowledge. You are therefore advised to refine your assessments in line with the guidance below on high-risk assessments, and then, over time, take a programme-level approach to redesigning assessments in line with Bristol's assessment strategy, taking into account the opportunities and risks of generative AI.  

What should I do if my assessments are high risk? 

Here are suggestions to mitigate the risks in the short term, without changing the format of the assessment: 

  • Make clear to students the purpose of the assessment and the permitted level of generative AI use, and discuss why that level has been chosen. 
  • Consider how formative assessment can build a stronger sense of students' learning by incorporating more contextual, local, contemporary or personal reflections.  
  • Use in-class sessions to teach students academic writing, mathematical or coding skills, and engage them in peer review of artefacts they produce in class – build their confidence in their own voice, mastery of concepts, original thinking, and evaluative judgement.  
  • Use formative assessment tasks for students to experiment with and critique limited use of generative AI, and to reflect on what it does well and where it falls short.   
  • Balance the authorised use of AI in formative tasks with clarity about why it may not be appropriate to use the same tools and approaches in summative tasks.  
