Predictability of the exams fiasco

Three weeks ago, when I read about the general approach of the model developed by Ofqual and used by the exam boards to produce this year’s A level and GCSE results, I predicted the chaos and anger that would follow (look at my Twitter feed if you don’t believe me).  The question I want to ask is this: if I could see this coming, why couldn’t Gavin Williamson, Boris Johnson, Ofqual and their advisors?  Maybe they did, and didn’t particularly care?

Whatever approach was used to assess students at A level and GCSE, there would have been controversy, winners and losers.  No system is perfect; however, the approach adopted by the government has exacerbated this and, whether intentionally or not, penalised students from more disadvantaged areas.  I repeat – this could have been seen well in advance, and the government and Ofqual could have acted to correct it if they’d wanted to.  It is all well and good for me or others to complain about the decisions the government make – that is easy, and too often seen as virtue signalling.  What we also need are alternative approaches, and I want to offer one the government could have used which I believe would have been fairer to more students.  First, though, the model that was used and its limitations need to be explored.

The model used is skewed in favour of small sixth forms, where subject entry numbers are small, and against larger sixth forms, subjects with large entries (my own subject of mathematics falls into this category) and schools on an upward trajectory in their annual performance.  Where entries are small, teacher estimates are used to create the results.  Where the entry is larger, teacher estimates are ignored by the model: the only thing it considers is the ranking provided by the school.  For me as a teacher this is the most invidious part of the whole process – we have had to rank each student within each grade, from most likely to least likely to achieve the grade estimated.  In most cases the model has already predetermined the spread of grades from the school’s performance over the last three years; it then simply fits the rankings to the grade spread it has assigned to that school.  Schools are only two or three years into a new specification, which always leads to greater variability in performance in the first few years.  Some schools are on a path of improvement over time – some may only just have started on it.  Schools often have a particularly gifted cohort in one particular year.  The model totally ignores these quite common factors that affect school performance.
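
To make the mechanics concrete, here is a rough sketch in Python of how a ranking-based standardisation of this kind works.  This is my own illustration of the general mechanism, not Ofqual’s actual implementation – the function, the rounding rule and the example distribution are all assumptions on my part.

```python
def assign_grades(ranked_students, historical_distribution):
    """Fit a school's rank order to a fixed grade distribution.

    ranked_students: student identifiers, best first (the school's ranking).
    historical_distribution: (grade, fraction) pairs, highest grade first,
        derived from the school's last three years of results.
    """
    n = len(ranked_students)
    results = {}
    position = 0
    for grade, fraction in historical_distribution:
        quota = round(fraction * n)  # places this grade is allowed
        for student in ranked_students[position:position + quota]:
            results[student] = grade
        position += quota
    # Anyone left over by rounding falls into the lowest grade.
    lowest = historical_distribution[-1][0]
    for student in ranked_students[position:]:
        results[student] = lowest
    return results

# A cohort of ten at a school that historically gets 20% A, 40% B,
# 30% C and 10% U: the ranking alone decides who lands where.
cohort = [f"student_{i}" for i in range(1, 11)]
history = [("A", 0.2), ("B", 0.4), ("C", 0.3), ("U", 0.1)]
print(assign_grades(cohort, history))
```

Note what is absent: the teacher’s estimated grades never appear as an input.  A school whose current cohort is genuinely stronger than the last three years’ results suggest has no way, under this mechanism, of seeing that reflected.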

Using mock grades as an alternative (through an appeal) is a red herring.  Mock grades are often lower than the grades attained in the actual exams, because students do not revise or prepare as much for a mock (which normally doesn’t really matter) as for the real exam (which does).  And if the paper used for the mock was not an actual past paper sat in exam conditions, how can we tell it was rigorous enough to give a valid prediction?  At the same time, using teacher predictions alone is unfair to future and past cohorts.  These are generally inflated because teachers, understandably, err on the side of the students they teach and so give a best-case estimate.  This will have been curtailed somewhat by the leadership team at the school, who must sign off on the estimates and hence will have done some internal moderation, but there would still be grade inflation.

In my opinion the best approach would be a mix of the model and teacher estimates.  Compare the two and, where there is broad agreement between the teacher prediction and the model, accept whichever errs on the side of the student.  Where there is a significant difference, the exam board should expect the school to evidence why.  The exam board can then judge whether the evidence is sufficient to move towards the teacher prediction.  If a school cannot provide evidence, or sufficient evidence, the results from the model should be used.  Schools were expected to return their predictions in June, and there was no exam marking to do (as there would normally be), so there was plenty of time for the exam boards to challenge schools where appropriate.  Schools would have expected their grades to be challenged and should have been ready for this.
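
The decision rule I have in mind is simple enough to sketch.  The grade scale, the one-grade threshold for “broad agreement” and the evidence flag below are illustrative assumptions of mine, to show the shape of the process rather than a worked-out policy.

```python
GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # lowest to highest

def reconcile(teacher_grade, model_grade, school_evidenced):
    """Combine a teacher prediction with the model's grade.

    Within one grade of each other: take the higher of the two
    (err on the side of the student).  Further apart: accept the
    teacher prediction only if the school has evidenced it to the
    exam board's satisfaction.
    """
    t = GRADES.index(teacher_grade)
    m = GRADES.index(model_grade)
    if abs(t - m) <= 1:  # broad agreement
        return GRADES[max(t, m)]
    return teacher_grade if school_evidenced else model_grade

print(reconcile("B", "A", False))  # A - agreement, favour the student
print(reconcile("A", "C", True))   # A - big gap, but evidence accepted
print(reconcile("A", "C", False))  # C - big gap, no evidence, model stands
```

Under a rule of this shape the model still anchors the overall distribution, but a school with a genuinely exceptional cohort has a route to demonstrate it rather than being capped by its own history.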

Instead the government have stuck to a model which favours private schools (small sixth forms and generally better results over the last few years) and works against exactly the type of students they said were their priority when they broke through the red wall last December.

I don’t for one minute believe that anyone did this purposely to punish disadvantaged students (as some on the tribal left are already saying), but the chaos seen was totally predictable.  Serious questions should have been asked of Ofqual by the Department for Education, as it is the politicians’ job to see the wider contexts and then ensure those in Ofqual mitigate for them.  Because of this, Gavin Williamson, who is ultimately responsible, should be seriously considering his position.
