Two years ago, as an Assistant Principal in a London school, I was asked by the head to provide a solution to ‘life after levels’. I’m not very proud of what I came up with. I suggested that we could pull down the new 1-9 GCSE grades into Key Stage 3, so that students would be judged on the same criteria from the moment they walked through the school gates in Year 7 until the day they collected their final grades in Year 11.
I now see that my ‘solution’ contained all the flaws of levels with none of the benefits – at least levels were broadly understood as a vague proxy for students’ progress through each subject. A tweet from one headteacher last week captures one of the issues with my proposal:
Two years on, I’m still grappling alongside school leaders with how to provide an assessment system that focuses on the specific things that students can and cannot do, while also providing some of the more hard-nosed data that might reveal patterns of progress over time. I think the solution lies in recognising the difference between practice and performance (and I’m indebted to this brilliant presentation by Daisy Christodoulou here).
The most important function of our assessment system is to provide feedback to students on their grasp of the specific, precise components of their subjects. At United Learning we use key performance indicators (KPIs) to break each subject down into its component parts. The KPIs provide a common language for the discrete knowledge, skills and understanding required in each subject, and they remind teachers and leaders that the most important function of assessment is to generate formative feedback. The vast majority of the feedback that our students receive in Key Stage 3 is focused on these component parts of each subject, captured in our KPIs.
Assessment at KS3 could stop there. Ofsted has made it clear that it ‘does not expect performance and pupil-tracking information to be presented in a particular format … such information should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of pupils in that school.’ This format could include showing actual improvements in actual work. We’ve become so used to grades and levels that we forget that they serve as a model – a representation – of a student’s performance. Sometimes it’s more helpful to focus on the actual work rather than the model – “The best model of a cat is a cat” (Nate Silver).
A KS3 assessment system which is rooted in the discrete components of each subject and which seeks evidence of progress in the actual work that students are producing would be a vast improvement on the level-driven approach that previously dominated.
But I think it’s reasonable that we tentatively ask more of our assessment system than this. It’s reasonable that we want to know how our students are doing compared to their peers in other schools and compared to their own starting points. It’s reasonable that we seek to identify variation between different subjects. It’s reasonable that we seek to compare the progress of different groups of students so that we can address any gaps before it’s too late. For this hard-nosed assessment information, our analysis needs to go beyond the progress students are making in the discrete elements of each subject, towards a more holistic judgement of their overall performance. This is where we turn to summative assessments.
I’m sure I’m not alone in remembering that old rule from teacher training that formative assessment is assessment for learning, whereas summative assessment is assessment of learning. Daisy’s presentation builds on this by making the distinction between practice and performance. Formative assessment is interested in the ongoing practice of the component parts of each subject, whereas summative assessment involves a judgement of overall performance.
End-of-unit tests provide a basis for this judgement, and mark a shift in focus from practice to performance. Take a Year 9 History unit on the suffrage movement. Throughout the term, students learn about the meaning of suffrage, the chronology, contemporary attitudes to women, the suffragettes, the suffragists, the First World War and the legislative process, alongside key skills such as drawing evidence from sources, comparing viewpoints and constructing concise sentences and paragraphs. Having practised these elements lesson by lesson, students sit a test which asks them to bring together all of these skills, knowledge and understanding into a holistic performance by writing a structured answer to an open question such as ‘why were some women given the right to vote in 1918?’ Depending on the frequency of this summative test (two or three per year seems about right), students would answer several other questions drawn from their work throughout the year.
As long as the whole year group sits the same test, and as long as the tests have been marked consistently within departments, we can compare the performance of students against their peers. Knowing that I received 73% on my History test and that I placed in the 85th percentile of my year group is valuable and powerful information. Grades and levels are abstract, whereas knowing my performance in relation to my peers is meaningful and motivating.
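The percentile claim above can be sketched in a few lines of Python. This is a minimal illustration, not a real system: the cohort scores and the helper name `percentile_in_cohort` are invented for the example.

```python
# Minimal sketch: place one student's % score within their year-group
# cohort. The cohort scores below are invented for illustration.

def percentile_in_cohort(score: float, cohort_scores: list[float]) -> int:
    """Return the percentage of the cohort scoring at or below this mark."""
    at_or_below = sum(1 for s in cohort_scores if s <= score)
    return round(100 * at_or_below / len(cohort_scores))

cohort = [41, 48, 52, 55, 58, 61, 64, 67, 70, 73, 76, 80, 84, 88, 91, 95]
print(percentile_in_cohort(73, cohort))
```

In practice a department would run this over the whole year group's marks for a single common test, which is why consistent marking within the department matters so much.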
Again, assessment at KS3 could stop there. Or we could tentatively take things a bit further by comparing students’ performance against an anchor point of age-related expectations (ARE). This involves professional judgement: a subject specialist decides what percentage constitutes the age-related expectation on each summative assessment. Once this has been determined, we can place students in different bands:
- Significantly above age-related expectations
- Above age-related expectations
- On age-related expectations
- Below age-related expectations
- Significantly below age-related expectations
In the example above we have chosen five bands, from significantly below to significantly above. We can link these five bands to starting points at KS2 and end points at KS4, e.g.
Under this model, we can track over time the proportion of students in each band. This could be compared by class, year group, subject, SEN, Pupil Premium, Most Able etc. Evidence of progress, as far as the school is concerned, would involve more students working at or above age related expectations than at a previous point in time. Students and parents could receive the following information:
- % score on last summative assessment
- Performance within cohort (i.e. percentile in year group)
- Band (i.e. Sig Above > Above > On > Below > Sig Below)
- What they need to do to improve (using the language of the KPIs).
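The banding step above can also be sketched in code. Everything here is an assumption made for illustration: the ARE threshold, the 10-point band width and the function name `band_for` would in reality come from a subject specialist's judgement on each individual test, not from fixed numbers.

```python
# Hedged sketch: map a % score to one of the five ARE bands, given the
# threshold a subject specialist has set for this particular test.
# The threshold and the 10-point band width are invented for the example.

def band_for(score: float, are_threshold: float, width: float = 10) -> str:
    """Place a % score into one of five bands around the ARE threshold."""
    if score >= are_threshold + 2 * width:
        return "Significantly above"
    if score >= are_threshold + width:
        return "Above"
    if score >= are_threshold:
        return "On"
    if score >= are_threshold - width:
        return "Below"
    return "Significantly below"

# e.g. if the specialist decides 60% represents ARE on this paper:
print(band_for(73, 60))  # Above
print(band_for(55, 60))  # Below
```

Tracking the proportion of students in each band over time, by class, subject or student group, then follows from counting these band labels at each assessment point.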
This approach to assessment at KS3 involves striking a balance between practice and performance. It takes inspiration from the challenge faced by athletes. Let’s take the example of a pole-vaulter. Between tournaments, the pole-vaulter focuses on the components of the craft: the grip, the run-up, the plant, the take-off, the twist, the extension, the arch. The pole-vaulter’s coach doesn’t give out medals during training – the coach provides feedback on each of these discrete elements. Come tournament time, the focus shifts from these discrete elements towards the overall performance. The feedback the athlete receives is not related to these elements, but to their performance, expressed on the stadium scoreboard by the height they clear and their success against their competitors. On the training ground the following week, the focus returns to the discrete components of the craft, ahead of the next tournament.
I think we can learn from this at Key Stage 3. An effective approach to assessment recognises the difference between practice and performance. When the focus is on practice, we address the constituent components of each subject. When the focus is on performance, we compare students with their peers and against an objective benchmark.
This isn’t the end of the story, but I hope it’s an improvement on my first attempt two years ago.