A couple of blogs over the weekend reflected on the role of data in measuring schools now that norm-referencing and bell curves dictate the distribution of exam grades. In Rethinking Success in the Post-Gaming Zero Sum Era, Tom Sherrington argues that school success will be more about the things that we can’t measure, rather than the data-driven metrics that currently keep school leaders awake at night. It’s a theme picked up by David Didau, who argues in Where Now for School Improvement? that the “agenda for school improvement has to move away from endlessly poring over data looking for patterns that don’t exist”.
Didau and Sherrington rightly point out that much of what has passed for school improvement has been illusory, based on tactical gaming and statistical quirks. They are correct to remind us that we now play an almost zero sum game where “roughly 40% of students will get grade 4 or below and there is nothing schools can do about it.”
Except, perhaps, make sure that their kids are not amongst the 40%?
The obvious response is that not all schools can ensure that all of their students sit above the bottom 40%, and of course they can’t. But if every school secured genuine gains in students’ knowledge and understanding, how would our ‘almost zero sum’ system react?
At least in theory, our exam system is able to recognise genuine improvements made by whole cohorts. As I understand it, it is Ofqual’s job to ensure that a C grade this year is of a similar standard to a C grade last year. It follows that if all students genuinely produce work of C+ standard, then they can be awarded as such (though of course this is exceptionally unlikely). The introduction of the National Reference Test supports this drive to recognise genuine gains in achievement across a whole cohort.
Yet even if grades were awarded purely against a bell curve, with 40% therefore gaining less than a C even if they produce better work than sub-C students in previous years, we can hope that these students possess the basic skills and knowledge needed for their future success, albeit without the grades to show for it.
Didau quotes an example provided by Jack Marwood:
Here is Seaside Primary School in North Yorkshire*, a fairly typical two-form entry school. These are the percentages of children achieving level 4 or above in reading, writing and maths:
- 2013: 77%
- 2012: 70%
- 2011: 58%
- 2010: 69%
- 2009: 77%
- 2008: 76%
There is no pattern. Unless a school consistently records 100%, there never is a pattern for any school, in any historical data. This is because the data is based on children’s results, and children are complicated and individual, and the school population in any given school is statistically too small to make meaningful generalisations.
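Marwood’s point about small cohorts is easy to illustrate with a quick simulation. The figures below are my own assumptions, not numbers from his post: I take a two-form entry year group of roughly 60 children and suppose each child has the same fixed 70% chance of reaching level 4. Even though the school’s “true” quality never changes, the headline percentage swings from year to year through sampling noise alone:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

COHORT = 60      # assumed size of a two-form entry year group
TRUE_RATE = 0.7  # assumed underlying chance each child reaches level 4

# Simulate six years of results for a school whose underlying
# quality is identical every year.
results = []
for year in range(2008, 2014):
    passes = sum(random.random() < TRUE_RATE for _ in range(COHORT))
    results.append((year, round(100 * passes / COHORT)))

for year, pct in results:
    print(year, f"{pct}%")
```

Run it and the year-on-year swings look much like Seaside Primary’s: several percentage points of movement with no trend behind them, which is exactly what Marwood means by “no pattern”.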
I’m no statistician, but by Marwood’s reasoning I presume we’re unable to criticise the poor records of individual hospitals and clinics, let alone doctors and nurses, since any given hospital “is too small to make meaningful generalisations.” And what about a dangerous stretch of road which causes several accidents – “too small to make meaningful generalisations” – or a restaurant with a nasty hygiene record: “too small to make meaningful generalisations”?
Didau adds: “Stupidly though, the government is still insisting that schools need to be above average to avoid being labelled as failing. Schools will tear themselves apart looking for the latest silver bullets but there are none. If a school does especially well in one year – or even two – results will inevitably regress to the mean. No amount of grit or growth mindset can resist this mathematical bulldozer.”
I’m not sure this is entirely fair. From this summer, the floor standard will be applied to schools with a Progress 8 score below -0.5. In these schools, students score, on average, half a grade lower across all of their subjects than students with similar starting points in other schools.
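As a rough sketch of what a -0.5 score means: for each pupil, compare their actual average grade with the national average for pupils with the same Key Stage 2 starting point, then average those differences across the school. The pupil numbers below are invented for illustration, and the real calculation weights subjects into Attainment 8 “slots”, but the arithmetic is the same in spirit:

```python
# Hypothetical pupils: (actual average grade, national average grade
# for pupils with a similar KS2 starting point). These numbers are
# made up for illustration only.
pupils = [
    (4.0, 4.8),
    (5.5, 5.9),
    (3.2, 3.8),
    (6.1, 6.3),
]

# Progress 8 is (in essence) the mean of each pupil's actual-minus-expected gap.
diffs = [actual - expected for actual, expected in pupils]
progress8 = sum(diffs) / len(diffs)

print(f"Progress 8 score: {progress8:+.2f}")  # prints "Progress 8 score: -0.50"
```

Here every pupil falls slightly short of similar pupils nationally, and the gaps average out to exactly -0.5: the floor standard. The point is that this is a measure of relative progress, not a raw pass rate, so it is not defeated by the bell curve in the way Didau describes.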
There is so much more to a school than its Progress 8 score, but if my kids were at a school where they made half a grade less progress than their peers in other schools, I would want to know about it, especially if this lack of progress occurred year after year. Of course it wouldn’t mean that the school’s leaders and teachers were uncaring or incompetent, but I dare say they could be doing a few things better.
Student outcomes are not statistical quirks. They are real grades of real kids. I agree 100% with Didau and Sherrington that quick wins are less likely in the new system, and we need to take a careful, rounded view of a school’s performance before we even begin to draw conclusions. But I still think that student outcomes have a fair bit to tell us about whether or not a school is doing its job. If it’s a question of being statistically accurate or asking genuine questions about repeated under-performance, I know which side I’m on.
A lesson in PhD statistics will be scant consolation to a kid who leaves school without GCSE maths after 11 years of mathematics tuition. And they might not appreciate the irony if they’ve also failed their English GCSE.
Kids deserve better than exam-factory schooling, where educational success is equated solely with exam success, but they also deserve better than a school which dismisses under-performance as a ‘statistical quirk’ and takes a fatalist approach towards exams – “roughly 40% of students will get grade 4 or below and there is nothing schools can do about it.”
There’s a spreadsheet on my laptop which shows the ten-year trend in GCSE results of the schools I work with. The columns on the left of the spreadsheet show that a decade ago several schools repeatedly saw less than 20% of students leave with 5 good GCSEs. Now these schools, which are universally in challenging areas, repeatedly find themselves exceeding the national average. This means thousands of students leaving school with genuine life chances which were denied to their elder siblings just a few years before. We should celebrate this.
Of course we need to recognise the limits of data, but my concern about an anti-outcomes narrative is that it can easily be used to excuse, or simply ignore, entrenched educational failure. I believe that with 11 years of good teaching, the right curriculum and consistently good behaviour, our most disadvantaged kids can compete with their more privileged peers.
I’m reminded of a line in Andrew Adonis’s 2012 book Education, Education, Education. Adonis describes a visit to a failing school in the North-East, and his incredulity when he heard this from one of the teachers: ‘“Twenty years ago”, he said, “when the boys left here, they walked down the hill and turned left to get a job in the shipyard or right to go down the mines. All those jobs have now gone. They might as well walk straight into the sea”. I didn’t know how to respond. It seemed too obvious to say that, if they got a decent education, they might prosper on dry land’.
I worry about a similar analysis now: “Five years ago, the boys could have got C grades, but now that 40% of grades are always below C, there’s nothing we can do.”
Twenty-five years since the emergence of league tables, I think we’ve finally got a system (thanks to Progress 8 and tough terminal exams) which shows up real school improvement. I hope that Ofsted and Amanda Spielman continue to challenge schools which persistently fail to ensure that students depart with their pockets full of decent grades.