22. Marking, moderation and anonymity

22.1. Programmes will have in place and operate marking and moderation processes that ensure the reliability, consistency, and accuracy of marking, in line with the expectations set out in this section. Such processes may be organised at a school or faculty level.

22.2. The marking and moderation processes should be made available to students.

Marking practices

22.3. Single marking is the process by which student work is marked by one individual against a marking scheme. Moderation must take place on individual assessments that are single marked, subject to the exemptions set out in (22.5) below.

22.4. Double (or ‘second’) blind marking is the process by which an assessment is marked independently by two markers, who then agree a final mark (or marks). Neither marker is aware of the other’s assessment decision in formulating their own mark. Moderation is not required for work that is double marked as the double marking effectively takes the place of moderation. Double marking will normally take place for:

  1. Dissertations and end-of-programme projects or equivalent (a dissertation supervisor may act as an internal examiner only as part of a marking team);
  2. Where there are particular difficulties in applying moderation given the nature of the assessment (e.g. a live practical assessment that is not recorded);
  3. Work marked by non-academic staff (depending on the experience of the markers) or inexperienced markers;
  4. Where this is required by a professional, statutory or regulatory body.

22.5. The practice of one marker seeing the marking of another marker (non-blind) is deemed to be a form of moderation. 

22.6. Where there is more than one marker for a particular assessment task, schools should take steps to ensure consistency of marking. Programme specific assessment criteria must be precise enough to ensure consistency of marking across candidates and markers, compatible with a proper exercise of academic judgement on the part of individual markers. 

22.7. Markers are encouraged to use a pro forma to show how they have arrived at their decision. Comments provided on the pro forma should help candidates, internal markers, moderators and external examiners to understand why a particular mark has been awarded. Schools should agree, in advance of the assessment, whether internal moderators have access to the pro forma / mark sheets completed by the first marker before or after they review a candidate’s work.

22.8. Where a student provides answers to more questions than are required by the examination paper, the marker should mark all the answers and use the marks from the highest-scoring answers to calculate the assessment mark.
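
As a minimal illustration of this rule, the following sketch (the function name and mark values are hypothetical, not drawn from this section) marks all attempted answers and counts only the highest-scoring ones:

```python
def assessment_mark(answer_marks, required_answers):
    """Combine per-question marks where a candidate has answered more
    questions than the paper requires: every answer is marked, but only
    the highest-scoring `required_answers` marks count."""
    counted = sorted(answer_marks, reverse=True)[:required_answers]
    return sum(counted)

# Example: the paper requires three answers but the candidate attempted four.
print(assessment_mark([14, 9, 17, 12], required_answers=3))  # 43 (17 + 14 + 12)
```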

22.9. The School Education Director or delegate is responsible for overseeing the allocation of marking, and the forms of marking used in programmes within their School. 

Benchmarking

22.10. Benchmarking is a process to promote consistent standards among multiple markers of a specific assessment. It should be used in appropriate cases prior to marking and moderation. 

22.11. In large units it is common to have multiple markers of an assessment. In such cases, the possibility arises of misalignment across markers even where markers have been individually consistent. To encourage collective consistency and reduce the need for re-marking of scripts, benchmarking should be used as an important part of the overall quality assurance process. 

22.12. A typical benchmarking exercise could involve all markers individually marking the same small selection of randomly chosen scripts (e.g. 5 scripts) and then agreeing how marks should be allocated against the marking criteria to inform marking of the remaining scripts. The number of scripts selected for such benchmarking will depend on the nature of the assessment. For example, where optional questions exist, it may be necessary to select a higher number of scripts than usual to ensure all questions are discussed in the benchmarking exercise.  
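
A minimal sketch of how such a sample might be drawn is given below. The data structure and function are illustrative assumptions only; this section does not prescribe how scripts are selected beyond random choice and coverage of optional questions.

```python
import random

def benchmarking_sample(scripts, base_size=5, seed=None):
    """Draw a random benchmarking sample of scripts, then top it up so
    that every optional question attempted in the cohort appears at
    least once. `scripts` maps a script ID to the set of questions
    answered in that script."""
    rng = random.Random(seed)
    ids = list(scripts)
    sample = set(rng.sample(ids, min(base_size, len(ids))))
    covered = set().union(*(scripts[s] for s in sample)) if sample else set()
    all_questions = set().union(*scripts.values()) if scripts else set()

    # Add further randomly ordered scripts until every question is covered.
    for script_id in rng.sample(ids, len(ids)):
        if covered >= all_questions:
            break
        if script_id not in sample and not scripts[script_id] <= covered:
            sample.add(script_id)
            covered |= scripts[script_id]
    return sample

# Example: eight scripts answering optional questions 1-4.
scripts = {f"S{i}": qs for i, qs in enumerate([{1, 2}, {1, 3}, {2, 4},
                                               {1, 2}, {3, 4}, {1, 4},
                                               {2, 3}, {1, 2}], start=1)}
print(benchmarking_sample(scripts, base_size=5, seed=0))
```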

22.13. Benchmarking should take place before marking, so it should be arranged as soon as possible after an assessment has taken place. It is good practice to organise benchmarking meetings as part of the marking allocation within a school.

Calibration

22.14. Calibration is the process to promote consistency of standards between institutions, units or academic years. 

22.15. Some assessment types call for academics’ individual expert judgements. Internal calibration helps markers across and within programmes to develop shared understanding of academic judgement across different assessments, units or academic years.  The purpose of calibration is to enhance and share good academic practice amongst markers rather than ensuring consistent standards for a particular cohort of students.

22.16. Internal calibration exercises can take many forms but often involve a group of academics reviewing a small sample of anonymous student assessments before discussing the decision-making behind hypothetical marks and feedback. Unlike benchmarking, internal calibration exercises are not intended to agree a ‘correct’ mark or prepare teams for marking particular assessments. Nor are they best used to identify deviations from norms to be corrected. Rather, periodic internal calibration exercises help academics develop their individual judgement through knowledge of how other experts might approach a broadly similar scenario. In that sense, the use of internal calibration recognises that robust individual academic judgement arises from participation in a community of expert assessors who periodically reflect on their decision-making. Likewise good practice in feedback is encouraged and facilitated by reflecting on the marking of student work by other experts. 

22.17. Faculties / schools should have processes in place that allow programme teams to develop a shared understanding of marking criteria and exercise their individual academic judgment with knowledge of how others might exercise that judgment in broadly similar scenarios. 

Internal moderation

22.18. Summative assessment will normally be moderated. Exceptions are:

  • where the assessment contributes 10% or less to the unit mark
  • objective tests, such as multiple-choice questions.

22.19. The sample size for moderation should be adequate to provide assurance that the work has been properly marked across a range of student performance in the assessment for each marker. The following procedure is recommended to arrive at a representative sample:

  1. sufficient standard ranges should be established across the marking scale from which the selection is to be made (for example, the ranges could consist of fail, third class, 2:2, 2:1 and first, or the descriptor categories on the 0-20 marking scale);
  2. a sliding scale corresponding to the number of assessments available for moderation should be employed; as a guide, a minimum of eight or 10% of the available assessments, whichever is greater, should be included in the sample. The sliding scale should then be adjusted according to:
    1. the number of scripts available, so that the sampled proportion reduces as the number of available scripts rises; and
    2. the number of first markers for an assessment or component part of an assessment; the higher the number of first markers, the more assessments are moderated (to ensure adequate moderation across all markers).
  3. Where the number of submitted pieces of assessment for the unit is seven or fewer, all the assessments should be subject to internal moderation.
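
A minimal sketch of this guidance is given below. The treatment of multiple first markers (a per-marker top-up) is an illustrative assumption only; the section requires simply that more assessments are moderated as the number of first markers rises.

```python
import math

def moderation_sample_size(available, first_markers=1, per_marker_min=2):
    """Suggested number of assessments to moderate: seven or fewer pieces
    are all moderated; otherwise at least eight or 10% of the available
    assessments, whichever is greater. The per-marker top-up for multiple
    first markers is an assumed illustration of the sliding-scale adjustment."""
    if available <= 7:
        return available
    sample = max(8, math.ceil(0.10 * available))
    # More first markers -> larger sample, so every marker is represented.
    sample = max(sample, first_markers * per_marker_min)
    return min(sample, available)

# Examples
print(moderation_sample_size(6))                      # 6  (all moderated)
print(moderation_sample_size(40))                     # 8  (minimum of eight)
print(moderation_sample_size(200))                    # 20 (10% of 200)
print(moderation_sample_size(200, first_markers=15))  # 30 (per-marker top-up)
```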

The internal moderation of assessments that do not generate a numerical grade (i.e. pass/fail assessments) should focus upon those at the pass/fail border.

The marks of assessments that significantly contribute to determining progression within a programme or the award and classification of a qualification (e.g. a dissertation or project) should be carefully reviewed through the moderation process, if they are not double-marked.

22.20. The responsibilities for conducting internal moderation are:

  • Moderation is undertaken by an individual or team of academic staff within the subject, as allocated by the designated school representative (i.e. School Education Director or Exams Officer).
  • The Unit Director is responsible for ensuring that moderation takes places in their unit in accordance with these expectations.
  • The Programme Director is responsible for having an overview of moderation across the programme.
  • The final decision on marks rests with the exam boards, taking account of the view of the external examiner(s).

22.21. Moderation should take place after the assessment has been marked and in advance of submission to the exam board, with reference to the University’s policy on providing feedback to students on their work. Where necessary, priority should be given to the timely release of feedback over the completion of the moderation process. In such cases, students should be informed of the status of the mark that has been released.

22.22. The role of the moderator is to form a view of the overall marking, not to apply corrective marking to individual assessments. The moderator should produce a report, which should instigate a dialogue between the marker and moderator, the conclusions of which should be formally captured as part of an audit trail. The purpose of the audit trail is to provide the relevant exam boards, including the external examiner, with a means to determine whether the marks are fairly awarded and consistent with relevant academic standards, and to serve as evidence in the event of an appeal.

22.23. Moderators should review the marking of the individual marker(s) against the relevant marking criteria, within the sample and across all the marks awarded, to identify whether the marks awarded appropriately reflect the standard of work and whether there are any inconsistencies within the marking. A separate process should be in place to check that all questions in an assessment have been marked and that the marks are totalled correctly.

22.24. Specific outcomes arising from the moderation process are:

  • Moderator confirms marks.
  • An entire set of marks is adjusted in relation to the marking criteria and the mark distribution.
  • A sub-set of marks is adjusted to rectify a perceived inconsistency within the marks profile and/or between markers.
  • The whole set or a sub-set of assessments is re-marked because the inconsistencies cannot be rectified in a simple manner.

‘Mark adjustment’, as an outcome of moderation, is a legitimate and intended means of ensuring that marks are robust and fair. An adjustment may apply to an entire set of assessments or an identified sub-set. Adjustments should not be made to individual marks in isolation.
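
A minimal sketch of a batch adjustment is given below. The marking scale, the selection of the sub-set and the size of the adjustment are illustrative assumptions; the section only requires that adjustments apply to a whole set or an identified sub-set, never to an individual mark in isolation.

```python
def adjust_marks(marks, adjustment, selected=None):
    """Apply a moderation adjustment to the whole set of marks, or to an
    identified sub-set (e.g. one marker's batch), never to a single mark
    in isolation. `marks` maps a script ID to its mark; the 0-100 clamp
    is an assumed marking scale."""
    targets = set(marks) if selected is None else set(selected)
    if len(targets) < 2:
        raise ValueError("adjustments apply to a set or sub-set, not one mark")
    return {sid: (min(100, max(0, m + adjustment)) if sid in targets else m)
            for sid, m in marks.items()}

# Example: moderation dialogue concludes one marker's batch is 3 marks severe.
marks = {"S1": 58, "S2": 64, "S3": 49, "S4": 71}
print(adjust_marks(marks, +3, selected={"S1", "S3"}))  # S1 -> 61, S3 -> 52
```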

22.25. In cases where a moderator and marker cannot agree on a course of action, the batch of work should be referred to a second internal moderator (as identified by the School Education Director or delegate) for adjudication. 

22.26. The relevant school exam board should be assured that moderation has occurred and action has been taken to assure the quality and standards of the marks presented to it. 

22.27. Evidence of moderation should be made available to the external examiner for review, which may consist of samples of moderated assessment, a distribution of unit marks and the formal record of dialogue between markers and moderators. Internal examiners should consider and respond to any issues raised by the external examiner prior to the exam board wherever possible.

22.28. The School should review the operation of its policy on internal moderation for its programmes on an annual basis. The University Quality Team will investigate moderation practices and their implementation where there is cause for concern (e.g. if it is raised by an external examiner in their report).

22.29. Where coursework is assessed summatively, schools should have a system in place to ensure students’ work is available for moderation at a later date, by a means that ensures that the marked work is identical to that originally submitted.

22.30. Work assessed for summative purposes should be capable of being independently moderated and made available in case it needs to be moderated by the external examiner(s). It is recognised that second marking/moderation may present difficulties in some forms of summative assessment such as a class presentation. In these cases, evidence of how the assessment mark was reached should be preserved for moderation. 

Scaling of marks

22.31. Scaling is not normally permitted, except in the following two circumstances:

  1. Where the raw scores for the whole cohort are converted onto an appropriately distributed marking scale as part of the planned design of the assessment. The rationale and mechanism for scaling should be recorded in the unit specification and in the minutes of the relevant exam board.
  2. Where the marks of a cohort of students are moderated post hoc due to an unintended distribution of marks. When an assessment or a question within an assessment has not performed as intended, scaling may be employed (in this instance the methodology will not have been planned beforehand). This should be an exceptional event. The rationale and mechanism for intended scaling should be recorded in the minutes of the relevant exam board.
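
A minimal sketch of a planned linear conversion under circumstance (1) is given below. The linear mapping and the anchor points are illustrative assumptions only; this section does not prescribe a particular scaling method.

```python
def scale_mark(raw, raw_pass, raw_max, scale_pass=40, scale_max=100):
    """Linearly map a raw score onto the reporting scale so that
    `raw_pass` maps to the pass mark and `raw_max` maps to the top of
    the scale; scores below the pass standard are scaled proportionally."""
    if raw >= raw_pass:
        scaled = scale_pass + (raw - raw_pass) * (scale_max - scale_pass) / (raw_max - raw_pass)
    else:
        scaled = raw * scale_pass / raw_pass
    return round(max(0.0, min(float(scale_max), scaled)), 1)

# Example: a paper marked out of 60 where 24 is judged the pass standard.
print(scale_mark(24, raw_pass=24, raw_max=60))  # 40.0
print(scale_mark(60, raw_pass=24, raw_max=60))  # 100.0
print(scale_mark(12, raw_pass=24, raw_max=60))  # 20.0
```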

22.32. Before scaling is applied, its use and the method to be employed must be agreed with the relevant Faculty Associate Pro Vice-Chancellor (Education and Students) or delegate, and the relevant external examiners must be consulted, before the outcome is reported to the exam board.

22.33. The use of scaling must also be made transparent to students: in the case of circumstance (1), students must be informed of the way in which the raw scores are converted onto the marking scale prior to the assessment; whilst in the case of circumstance (2), students must be informed of the process after the assessment.

Anonymity in assessment 

22.34. ‘Anonymity’ is defined as the use of an identifier, which cannot be related to a student’s name without reference to a central register or other mechanism, in the assessment process. An identifier is adopted in order to: avoid unconscious and conscious bias in marking, respect student confidentiality, and ensure fairness when progression and award decisions are made; however, it does not necessarily mean that it is impossible for a member of staff to uncover the identity of a particular student. 
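
As a minimal sketch of this kind of identifier (the register structure and identifier format are hypothetical, not prescribed by this section), a candidate number can be issued and held in a central register so that markers see only the number:

```python
import secrets

class CandidateRegister:
    """Central register linking anonymous candidate numbers to students.
    Markers receive only the number; resolving it to a name requires
    access to the register, restricted to staff whose role requires it."""

    def __init__(self):
        self._by_identifier = {}

    def issue_identifier(self, student_name):
        identifier = f"C{secrets.randbelow(10**6):06d}"
        while identifier in self._by_identifier:  # avoid collisions
            identifier = f"C{secrets.randbelow(10**6):06d}"
        self._by_identifier[identifier] = student_name
        return identifier

    def resolve(self, identifier):
        # Only for staff for whom identification is a requirement of their role.
        return self._by_identifier[identifier]

register = CandidateRegister()
candidate_no = register.issue_identifier("A. Student")
print(candidate_no)  # e.g. 'C483920' -- all that the marker ever sees
```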

22.35. Members of staff must respect anonymity where it is employed and not identify, or seek to identify, students unless it is a requirement of their role or there is a clear benefit to the student in doing so e.g. the provision of specific feedback to the student, the correct treatment of exceptional circumstances. 

22.36. Where students might be identifiable e.g. because they are part of a very small cohort or they have an unusual pattern of study, anonymity must be respected as for any other student in line with this section. 

22.37. Schools are responsible for informing students of how they should identify their work. 

22.38. It is the responsibility of students to employ the anonymity mechanisms provided to them.  

The marking of credit-bearing ‘summative’ assessment of learning 

22.39. Summative assessment should be marked anonymously where that is possible and practicable, and consistent with the assessment and its objectives. Whether anonymity is expected at the first marking and moderation stages is set out below, by assessment type.

Assessments where anonymity is expected at both the first marking and moderation stages:  

  • exams   
  • timed assessments
  • summative coursework not included below  

Assessments where anonymity may not be expected at the first marker stage:  

  • all formative coursework (where a mark does not contribute to the unit mark and passing is not required for credit)   
  • summative assessment where formative feedback is provided on an early draft as part of the design of the assessment 
  • final year and PGT projects / dissertations   
  • presentations   
  • group work (especially where ‘equity-share’/student contribution marking is a component)   
  • bespoke coursework – where all students formally agree the specifics of their coursework with a tutor, such that they are necessarily identifiable.   
  • practical in-person assessment, e.g. in labs, fieldwork tasks, medical practicals, oral exams
  • summative assessment that accounts for a small part of the unit mark and where the provision of individualised feedback for learning is an inherent part of the design of the assessment 

22.40. Anonymity is a general expectation when the marking of student work is moderated. 

22.41. The marks awarded for summative assessments should be released individually to students.   

22.42. Specific moderation techniques must be used for non-anonymous summative assessments, e.g. the use of multiple markers.

The marking of non-credit-bearing ‘formative’ assessment for learning

22.43. When designing formative assessment, priority should be given to the educational benefits of the assessment rather than to anonymity; for example, anonymity should not interfere with the provision of feedback to students.

22.44. While anonymity is not required for formative assessment, it may still be preserved where it is consistent with the assessment and its objectives.