KS3 Assessment: Performance, Practice and Pole Vaulting

Two years ago, as an Assistant Principal in a London school, I was asked by the head to provide a solution to ‘life after levels’.  I’m not very proud of what I came up with.  I suggested that we could pull down the new 1-9 GCSE grades into Key Stage 3, so that students would be judged on the same criteria from the moment they walk through the school gates in Year 7 until the day they collect their final grades in Year 11.

I now see that my ‘solution’ contained all the flaws of levels with none of the benefits – at least levels were broadly understood as a vague proxy for students’ progress through each subject.  A tweet from one headteacher last week captures one of the issues with my proposal:


Two years on I’m still working with school leaders to devise an assessment system that focuses on the specific things that students can and cannot do, while also providing some of the more hard-nosed data that might enable patterns of progress over time to be identified.  I think the solution lies in recognising the difference between practice and performance (and I’m indebted to this brilliant presentation by Daisy Christodoulou here).

The most important function of our assessment system is to provide feedback to students on their grasp of the specific, precise components of their subjects.  At United Learning we use Key Performance Indicators (KPIs) to break each subject down into its component parts.  The KPIs provide a common language for the discrete knowledge, skills and understanding required in each subject, and they remind teachers and leaders that the most important function of assessment is to generate formative feedback.  The vast majority of the feedback that our students receive in Key Stage 3 is focused on these component parts of each subject, captured in our KPIs.

Assessment at KS3 could stop there.  Ofsted have made it clear that it ‘does not expect performance and pupil-tracking information to be presented in a particular format … such information should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of pupils in that school.’  This format could include showing actual improvements in actual work.  We’ve become so used to grades and levels that we forget that they serve as a model – a representation – of a student’s performance.  Sometimes it’s more helpful to focus on the actual work rather than the model – “The best model of a cat is a cat” (Nate Silver).

A KS3 assessment system which is rooted in the discrete components of each subject and which seeks evidence of progress in the actual work that students are producing would be a vast improvement on the level-driven approach that previously dominated.

But I think it’s reasonable that we tentatively ask more of our assessment system than this.  It’s reasonable that we want to know how our students are doing compared to their peers in other schools and compared to their own starting points.  It’s reasonable that we seek to identify variation between different subjects.  It’s reasonable that we seek to compare the progress of different groups of students so that we can address any gaps before it’s too late.  For this hard-nosed assessment information, our analysis needs to go beyond the progress students are making in the discrete elements of each subject, towards a more holistic judgement of their overall performance.   This is where we turn to summative assessments.

I’m sure I’m not alone in remembering that old rule from teacher training that formative assessment is assessment for learning, whereas summative assessment is assessment of learning.  Daisy’s presentation builds on this by making the distinction between practice and performance.  Formative assessment is interested in the ongoing practice of the component parts of each subject, whereas summative assessment involves a judgement of overall performance.

End of unit tests provide a basis for this judgement, and mark a shift in focus from practice to performance.  Take a Year 9 History unit on the suffrage movement.  Throughout the term students learn about the meaning of suffrage, the chronology, contemporary attitudes to women, the suffragettes, the suffragists, the First World War and the legislative process, alongside key skills such as drawing evidence from sources, comparing viewpoints and constructing concise sentences and paragraphs.  Having practised these elements lesson by lesson, students sit a test which asks them to bring together all of these skills, knowledge and understanding into a holistic performance by writing a structured answer to an open question such as ‘why were some women given the right to vote in 1918?’  Depending on the frequency of this summative test (2 or 3 per year seems about right), students would answer several other questions drawn from their work throughout the year.

As long as the whole year group sits the same test, and as long as the tests have been marked consistently within departments, we can compare the performance of students against their peers.  Knowing that I received 73% on my History test and that I placed in the 85th percentile of my year group is valuable and powerful information.  Grades and levels are abstract, whereas knowing my performance in relation to my peers is meaningful and motivating.

Again, assessment at KS3 could stop there.  Or we could tentatively take things a bit further by comparing students’ performance against an anchor point of age related expectations (ARE).  This will involve professional judgement, as a subject specialist decides what percentage would constitute age related expectation on each summative assessment.  Once this has been determined, we can place students in different bands:

  • Significantly above age related expectations
  • Above age related expectations
  • On age related expectations
  • Below age related expectations
  • Significantly below age related expectations

In the example above we have chosen 5 bands from significantly below to significantly above.  We can link these 5 bands to starting points at KS2 and end points at KS4, e.g.


Under this model, we can track over time the proportion of students in each band.  This could be compared by class, year group, subject, SEN, Pupil Premium, Most Able etc.  Evidence of progress, as far as the school is concerned, would involve more students working at or above age related expectations than at a previous point in time.  Students and parents could receive the following information:

  • % score on last summative assessment
  • Performance within cohort (i.e. percentile in year group)
  • Band (i.e. Sig Above > Above > On > Below > Sig Below)
  • What they need to do to improve (using the language of the KPIs).
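To make that reporting concrete, here is a minimal sketch of how the last two items might be computed from raw summative scores.  Everything here is an illustrative assumption rather than anything prescribed above: the function names, the sample cohort, and in particular the +10/+20 and −10 percentage-point offsets used to separate the five bands, which in practice would be set by a subject specialist’s professional judgement.

```python
# Hypothetical sketch: turning a raw summative score into a percentile
# within the cohort and one of the five ARE bands. Thresholds and data
# are illustrative placeholders, not prescribed values.
from bisect import bisect_left

def percentile_in_cohort(score: float, cohort_scores: list[float]) -> int:
    """Percentage of the cohort scoring strictly below this student."""
    ranked = sorted(cohort_scores)
    below = bisect_left(ranked, score)
    return round(100 * below / len(ranked))

def are_band(score: float, are_threshold: float) -> str:
    """Assign one of the five bands relative to the ARE percentage.

    The +/-10 and +20 point offsets are illustrative assumptions;
    a subject lead would set these per assessment.
    """
    if score >= are_threshold + 20:
        return "Significantly above"
    if score >= are_threshold + 10:
        return "Above"
    if score >= are_threshold:
        return "On"
    if score >= are_threshold - 10:
        return "Below"
    return "Significantly below"

# A small made-up Year 9 cohort of percentage scores on one test.
cohort = [45, 52, 58, 61, 64, 67, 70, 73, 76, 81]
print(percentile_in_cohort(73, cohort))  # this student's cohort percentile
print(are_band(73, are_threshold=60))    # this student's ARE band
```

The same two functions could then be run per class, per subject or per student group (SEN, Pupil Premium and so on) to build the tracking comparisons described below.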

This approach to assessment at KS3 involves striking a balance between practice and performance.  It takes inspiration from the challenge faced by athletes.  Let’s take the example of a pole vaulter.  Between tournaments, the pole vaulter focuses on the components of the craft: the grip, the run-up, the plant, the take-off, the twist, the extension, the arch.  The pole vaulter’s coach doesn’t give out medals during training – the coach provides feedback on each of these discrete elements.  Come tournament time, the focus shifts from these discrete elements towards the overall performance.  The feedback the athlete receives is not related to these elements, but to their performance, expressed on the stadium scoreboard by the height they clear and their success against their competitors.  On the training ground the following week, the focus returns to the discrete components of the craft, ahead of the next tournament.

I think we can learn from this at Key Stage 3.   An effective approach to assessment recognises the difference between practice and performance.  When the focus is on practice, we address the constituent components of each subject.   When the focus is on performance, we compare students with their peers and against an objective benchmark.

This isn’t the end of the story, but I hope it’s an improvement on my first attempt two years ago.

8 thoughts on “KS3 Assessment: Performance, Practice and Pole Vaulting”

  1. A really interesting read, this will be of great use in planning our new Scheme for KS3 and its assessments.
    Two issues:
    1- I’m not sure that knowing your performance in relation to your peers IS motivating, and
    2- with so much curriculum change, it’s really difficult to know what expected achievement in each subject each year actually looks like. I guess as you say, the best we can do is look at our cohort, band it then look again next year and so on.

    My school asks for GCSE style grades from year 8 upwards. For maths, given that we can’t even do it for Y11, this is ludicrous. Our best option is as you describe: assume bands within the cohort and assume the entire cohort are doing as expected. Only when we get some GCSE grades to work back from (and I think 2017 won’t be enough as it will change when there is no need for comparable outcomes) will this even be remotely possible!


    • Thanks Chris, appreciate the comment. Telling a student how they are doing in relation to peers is an interesting one, but I’m comfortable with it. My view is that when students leave school at 16 they’re going to find out how they’re doing in relation to their peers so we might as well let them know while they still have a chance to do something about this. Agree that it’s tough to call Age Related standard at the moment – plenty of professional judgement involved but this will hopefully improve as we get more familiar with new curriculum and specs etc. Cheers, Steve


  2. This perfectly sums up the imperfect world we are trying to work through at the moment. Differences in assessment practices are perfectly acceptable but our systems need to have worth to the learners rather than convenience for the school. I feel 1-9 is, as you say, only capturing the worst of levels. Thought provoking read


  3. This is great Steve. We’ve been through almost the same process. My main problem with this (and hence with ours) is the explicit (it is implicit in ours, but still there) use of ‘expectations’ and hence the idea of a flight path. I want us to get rid of that, but am not sure we’re brave enough yet.


  4. Really useful, Steve, especially as an incoming United Learning subject leader. Like you, I take a lot of reassurance from Daisy’s clarity of thinking re assessment and the performance/practice dimension is essential here. At our Beyond Levels conference yesterday, we were addressed by Paul Crisp of CUREE, and he spoke about assessment for precision and for probability. Another interesting dimension – so now I’m wondering what will happen by putting the two together, and how this might enable some thinking about projections of progress. I’m thinking about how embedding KPIs precisely, in both practice and performance (i.e. AfL and AoL) can help in increasing the probability of a student achieving an ambitious target, and at which points along the way we are best placed to pinpoint a moment in the continuum in order to decide on our interventions. Hmmm. This’ll keep me busy for a while. Thanks!
    Lisa Pettifer


    • I think KPI performance has to be taken with a pinch of salt – there is so much more to any question. QLA and similar are great as long as you feed back directly on the specific aspects of the exact question rather than assume a wider misunderstanding. They are a starting point for revisiting work, I think, rather than the end-point of a completed ticklist.

