The effect of standardisation sessions conducted before English language writing exams on inter-rater and intra-rater reliability

Karadenizli-Çilingir, Mahmure Nur
Reliability refers to the consistency of a measure. Two types are distinguished: (a) inter-rater reliability, the consistency between two or more markers; and (b) intra-rater reliability, the consistency between assessments of the same work by the same rater at different times. Many factors affect rater reliability, including rater bias, educational background, the rater's language background, and the amount of experience in teaching and/or assessment. The literature contains a number of studies on how to eliminate, or at least reduce, the factors that lower rater reliability; for instance, using a rubric or a multiple/double-marking protocol has been proposed to improve it. However, many in the field believed that these practices alone could not yield the desired level of rater reliability, and so suggested that they be supported by standardisation sessions, also known as rater training sessions. Such sessions are believed to align teachers' judgements as far as possible by giving them a clear understanding of the rubric and of the criteria they need to take into consideration during assessment.
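Inter-rater reliability of the kind defined above is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. As an illustration only (the pass/fail scale and the six scores below are hypothetical, not data from this study), a minimal Python sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items given identical scores.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail marks from two raters on six essays.
a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; the same statistic applied to one rater's scores on two occasions gives a simple view of intra-rater reliability.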