Quality assessment

In June and July each year, we mark around seven million exam scripts and issue nearly five million results to students.

We know that getting the right result is very important to students and teachers and we have put it at the heart of everything we do. And by 'right result', we don't just mean getting the right number of marks or grade – it has to be an accurate reflection of a student's ability too.

Here you can find out what we are already doing to get it right, as well as how we are continuously improving what we do.

You can also watch our short film which explains why marking some subjects is more challenging than others and what research tells us about how to make marking as accurate and reliable as possible.

Assessing ability

One of the strengths of the English exam system is that it allows us to assess students' abilities by asking them to do specific tasks as part of their exam or coursework. This kind of assessment is considered more valid than one that relies only on short factual or multiple choice questions.

Some of these tasks include:

  • writing prose
  • drawing graphs
  • constructing arguments
  • speaking a language.

Of course, marking these tasks is not as straightforward as marking multiple choice questions, so we need examiners to use their expert, professional judgement to decide what mark best reflects the standard of the work.

To guide their judgements and make sure they are consistent, we set out the criteria for awarding marks in the mark scheme. We then check during the marking period that the marking meets the right standard.

Getting it right from the start

As we and the other exam boards have developed new GCSEs, AS and A-levels as part of the Government's reforms, we have also reviewed the design of our assessments and mark schemes.

Using evidence from research by our world-leading experts at AQA's Centre for Education Research and Practice (CERP), we have been able to make changes where we think they will bring about greater reliability and consistency of marking.

Some of this includes:

  • basic things, like having enough marks on a paper and not having questions no one can answer
  • more complex approaches, like understanding how the exam paper and the mark scheme work together.

Designing assessments that accurately test different ranges of ability means students are more likely to get the grade that reflects what they can do, and examiners are much clearer about the right number of marks to give the work they are assessing.

Getting the right people

As we are relying on examiners to use their professional judgement, we need to make sure we have the right people in place. Almost all our examiners are qualified teachers who are currently teaching or have recently taught the subject they are marking.

We sometimes recruit a small number of PGCE and PhD students to mark papers or a specific question, as our research shows that they do a good job. They are carefully selected for their strong academic backgrounds and familiarity with the subject they are marking, and they have enabled us to expand our pool of high-quality examiners.

All examiners and AQA colleagues who develop assessments go through rigorous training so they are up to date with the latest research and developments in assessment design.

How papers are marked

While examiners are experts in their subjects, they have different levels of experience when it comes to marking. We have strict quality controls in place and check marking along the way to ensure examiners mark to the right standard, whether marking is paper-based (traditional) or online.

Here is a brief overview of the process and quality controls we use.

Standardisation

Before marking starts, all examiners must successfully complete a standardisation process so they fully understand the mark scheme and where to award marks.

During standardisation, a panel of senior examiners marks a set of students’ answers. Examiners are then given these answers to mark, and their marks are compared with those agreed by the senior examiner panel.

If there are any differences, the examiner’s team leader will discuss the discrepancies with them. They will only be allowed to start marking their allocation of students’ work when their team leader is satisfied that they understand how to apply the mark scheme correctly.

Regular checking

In addition to standardisation at the start of the process, we monitor examiners' marking to ensure they understand the mark scheme and are applying it correctly. Depending on how the exam papers (scripts) are marked, there are different ways of carrying out this checking. Here is some of what we do.

Paper-based marking

  • A senior examiner checks the marking for the first 10 scripts to make sure the examiner is applying the mark scheme correctly.
  • When the examiner is halfway through their allocated scripts, the senior examiner checks another 15 marked scripts.
  • If the senior examiner has any concerns, they review additional papers and a decision is made as to whether the examiner is able to continue marking, or the papers should be given to another examiner to mark again.

Online marking

During online marking, we test a sample of examiners' work to check if they are applying the mark scheme correctly. We call this process 'seeding' and this is how it works:

  • the senior examiners agree the right mark for an answer
  • the question is given to the team of examiners to mark
  • the system compares the mark they give to the mark agreed by the senior examiners
  • if the mark is correct, the examiner is allowed to continue marking
  • if the mark is different, the system flags this to the examiner and their team leader
  • if the examiner is not applying the mark scheme correctly, they are prevented from continuing to mark
  • when this happens, the team leader contacts the examiner to discuss the differences before they are allowed to continue marking
  • if incorrect marking recurs, examiners can be permanently stopped and their marking given to someone else to mark again.

The seeding process is repeated throughout the marking period to help ensure that examiners stay on track.
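As a rough illustration, the seeding steps above can be modelled as a simple comparison loop. This is only a sketch: the function names, the exact-match rule, and the number of flags allowed before an examiner is stopped are assumptions made for the example, not details of AQA's actual system.

```python
# Illustrative model of the 'seeding' quality check described above.
# All names and thresholds are hypothetical, not AQA's real system.

def check_seed(examiner_mark: int, agreed_mark: int) -> bool:
    """Return True if the examiner's mark matches the senior examiners' agreed mark."""
    return examiner_mark == agreed_mark

def run_seeding(marks: list[tuple[int, int]], max_flags: int = 2) -> str:
    """Compare each (examiner_mark, agreed_mark) pair for the seeded answers.

    Discrepancies are flagged; if too many accumulate, the examiner is
    prevented from continuing until their team leader has spoken to them.
    """
    flags = 0
    for examiner_mark, agreed_mark in marks:
        if not check_seed(examiner_mark, agreed_mark):
            flags += 1              # flagged to the examiner and their team leader
            if flags > max_flags:
                return "stopped"    # prevented from continuing to mark
    return "marking"

print(run_seeding([(12, 12), (10, 12), (8, 12), (5, 12)]))  # → stopped
print(run_seeding([(12, 12), (11, 11)]))                    # → marking
```

In practice the check is repeated on fresh seeded answers throughout the marking period, so a single pass like this would run many times per examiner.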

Sometimes seeding isn't the best option, for example for longer essay-type answers. In these cases, we use double marking – where two examiners mark the same answer – to check the quality of marking. We need to allow for small, acceptable differences in professional judgement, which we call 'tolerance', but if the marks differ by more than a small number, a senior examiner decides on the correct mark. As with seeding, examiners who are not applying the mark scheme correctly are stopped until their team leader is able to contact them and discuss their marking. Examiners can also be temporarily stopped while the other examiner or the senior examiner completes their marking.
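The tolerance check in double marking can be sketched as a simple rule. The tolerance value and the way an in-tolerance pair is resolved are assumptions for this example only; the source says that out-of-tolerance pairs go to a senior examiner but does not state how in-tolerance pairs are settled.

```python
# Illustrative sketch of double marking with a tolerance.
# The tolerance value (2) and the resolution rule are hypothetical.

def resolve_double_mark(mark_a: int, mark_b: int, tolerance: int = 2):
    """Compare two examiners' marks for the same answer.

    Within tolerance, the difference is treated as acceptable professional
    judgement (here, assumed: the first mark stands). Outside tolerance,
    return None to signal that a senior examiner must decide the mark.
    """
    if abs(mark_a - mark_b) <= tolerance:
        return mark_a          # assumption: first examiner's mark stands
    return None                # escalate to a senior examiner

print(resolve_double_mark(14, 15))  # within tolerance
print(resolve_double_mark(10, 16))  # outside tolerance → escalated
```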

We are moving more of our marking online, as our research shows that it increases reliability. For example:

  • different questions are sent to different examiners so the final mark for any exam paper is the professional judgement of a number of experts
  • we can monitor each examiner throughout the whole marking period.

After results are published, teachers and students who are unhappy with their marks can apply for a review of marking or other post-results services.

Review of marking and marking quality

The exam system operates on a large scale and involves an important element of human judgement. Over the past few years, there has been an increase in the number of requests for marking reviews, especially in cases where a student has missed a grade by only a mark or two.

We know that every mark and grade change is significant for students, and we’re doing everything we can to make sure our marking is as accurate and reliable as possible. But we do sometimes get it wrong, so we want to make sure that any reviews of marking provide a fair and consistent way to put things right.

In 2016, Ofqual introduced some changes to post-results services that mean marks can only be changed where there has clearly been an error in the marking or moderation.

Training and monitoring reviewers

All our reviewers are experienced, expert examiners and moderators, who marked or moderated exams in the summer series before the reviews. In November 2017, we introduced some enhancements to their training and to how we monitor their progress and performance throughout the marking review period.

To ensure that they fully understand the review of marking process and what is expected of them, we provide mandatory marking review training through an e-learning module as well as detailed guidance on the process. Reviewers also spend time re-familiarising themselves with the mark scheme and doing some sample marking for the specific assessment they will be reviewing.

We have a dedicated team which monitors reviewers’ progress and performance throughout the marking review period and provides any necessary support to ensure agreed timescales are met, and that marks only change where there is an actual marking or moderation error.

Improving quality of marking

Because we recognise how important getting the right result is for everyone, we are always looking to improve, using research evidence to build on what works well and learning from mistakes. This includes analysing the performance of all our papers, processes and examiners after each exam series and taking on board feedback.

Understanding how exams work

Read about how exams work and watch our animations explaining how: