

This guide has presented a range of question styles and their variants, and pointed out some of the appropriate uses of each type. It raised issues such as preventing students from guessing, and the difference between a student recognising the correct answer and constructing it for themselves. It pointed out that text entry questions may be a solution to this, but that they need to be handled with care and may not be suitable for summative testing.

It has introduced Bloom's taxonomy of learning objectives, and suggested that this may be useful both in analysing questions to identify what they are testing and in writing questions that test more than simple recall. It has presented two question formats that lend themselves to creating more cognitively demanding questions: Assertion/Reason and Data Analysis.

The later sections presented some guidelines for writing effective questions, and illustrated some of the potential pitfalls. Finally, an approach was suggested for redesigning existing assessments for computer delivery, and two University of Bristol teaching staff described their own experiences of doing this.

For further help and information:


Learning Technology Support Centre, University of Bristol

Visit the CAA Centre website. Although this national project has now ended, the site contains many useful resources.


Bull, J. and McKenna, C. (2004) Blueprint for Computer Assisted Assessment. RoutledgeFalmer, London.

Williams, B. (2006) Assertion-reason multiple-choice testing as a tool for deep learning: a qualitative analysis. Assessment and Evaluation in Higher Education, Vol. 31, No. 3, pp. 287-301.

Wood, E. (2003) What are Extended Matching Sets Questions? Bioscience Education Journal, Vol. 1, Issue 1. Higher Education Academy. Available online at: [accessed Jan 2007]