The short answer is that I can’t tell you, but to become a Pupil Premium reviewer you have to go through a quality-checking process. School leaders who wish to become pupil premium reviewers must provide evidence of having improved the achievement of disadvantaged pupils in schools they have led or supported closely (https://www.gov.uk/guidance/pupil-premium-reviews-a-guide-for-nles#who-can-apply). Examples of the evidence can include:
- above average progress for disadvantaged pupils:
  - 86% in reading, 89% in writing and 84% in maths making expected levels of progress at key stage 2 (2014 3-year rolling average)
  - 56.5% in English and 51.4% in maths making expected levels of progress at key stage 4 (2013 3-year rolling average)
- above average attainment for disadvantaged pupils:
  - 64% achieving level 4+ in reading, writing and maths at key stage 2 (2014 3-year rolling average)
  - 38.7% achieving A*-C in English and maths at GCSE at key stage 4 (2013 3-year rolling average)
- at least 6 disadvantaged pupils in the relevant cohorts being tested
If your application is successful, you can charge what you like for conducting a pupil premium review, with a suggested rate of £300-500 per day and a suggested length of 2-4 days. That means anything between £600 and £2,000. That’s a lot of money!
So how good are these pupil premium reviewers at improving the achievement of disadvantaged pupils? Not an easy question to answer, but perhaps one could start by looking at the gap between the disadvantaged and others in their own schools. I went to the National College for Teaching & Leadership and interrogated their online directory of Pupil Premium Reviewers. There were 630 individuals offering Pupil Premium Reviews, of which 206 were secondary-based, which is my phase of interest. Of these, 189 were linked to schools with some data about the gap in 2015.
I chose value-added for the best 8 GCSEs or equivalents, a measure dominated neither by C/D boundary issues nor by English & maths progress, and I calculated the difference in VA between the disadvantaged and others using the recently published performance tables for 2015.
The good news was that, on average, these 189 schools were doing better than the total population of some 3,000 schools for which I had data. But not all of them were doing well. I calculated the deciles which divide the total population into ten equal parts and checked how well these Pupil Premium Reviewers’ schools were doing. A random sample of 189 schools would have about 19 in each decile, but surely these schools would all be in the top two or three deciles? The chart below shows they were not.
While 68% were above the median value of all schools, 32% (61 schools) were actually below the national median. The top three deciles held only 81 of the 189 schools, less than half of the schools quality assured by the National College to charge up to £2,000 for a Pupil Premium Review.
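The decile check itself is mechanical. A minimal sketch in Python, using hypothetical gap figures standing in for the real performance-table data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Best 8 VA gaps (disadvantaged minus others), standing in
# for the real 2015 performance-table data.
all_schools = rng.normal(loc=-0.3, scale=0.5, size=3000)      # national population
reviewer_schools = rng.normal(loc=-0.2, scale=0.5, size=189)  # reviewer-linked schools

# Decile boundaries from the whole population (9 cut points -> 10 deciles).
boundaries = np.percentile(all_schools, np.arange(10, 100, 10))

# Count how many reviewer-linked schools land in each decile.
decile_index = np.digitize(reviewer_schools, boundaries)
counts = np.bincount(decile_index, minlength=10)

print(counts)  # a random sample would put roughly 19 in each decile
```

If the reviewer-linked schools really were concentrated at the top, the counts would pile up in the last two or three deciles; a roughly flat profile means they look much like a random sample.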
How can this be? The schools must have been doing well for their headteachers to be recognised as pupil premium reviewers. Why are some of them underperforming in the very thing they are stamped competent to advise in?
The answer is pretty simple and comes in two parts.
Small cohorts and regression to the mean
123 of these schools (65%) have a below-average proportion of disadvantaged children in their 2015 data. Some of them have very small proportions of disadvantaged children (36 schools have 10% or fewer). When cohort sizes are small, the individual nature of the cohort dominates. With small cohorts, the outcomes vary much more from year to year, and doing well in one year doesn’t necessarily lead to doing well in the next. In fact, a particularly good performance in one year is more likely to be followed by a more average performance in the next. This is a classic statistical effect known as “regression to the mean”.
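Regression to the mean with small cohorts is easy to simulate. A sketch with invented numbers, not real school data: every pupil in every school has the same underlying 60% chance of success each year, so any year-on-year change is pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)

n_schools, cohort = 1000, 10  # small disadvantaged cohorts
p = 0.6                       # identical underlying success rate everywhere

# Two years of results drawn from the same distribution.
year1 = rng.binomial(cohort, p, size=n_schools) / cohort
year2 = rng.binomial(cohort, p, size=n_schools) / cohort

top = year1 >= 0.8  # schools that looked outstanding in year 1

print(year1[top].mean())  # at least 0.8, by construction
print(year2[top].mean())  # noticeably closer to 0.6: regression to the mean
```

The schools that scored 80%+ in year one average much closer to the underlying 60% in year two, with no change in quality at all.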
As an illustration of this, see the (near-)absence of correlation between the 2014 and 2015 value-added gaps between the disadvantaged and others for our pupil premium reviewers. The correlation coefficient is +0.29, a very weak correlation. In other words, being good at closing the gap in 2014 is barely linked to being good in 2015.
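For reference, the +0.29 figure is an ordinary Pearson correlation coefficient, which is a one-liner to compute (shown here on dummy gap figures, not the real 2014/2015 data):

```python
import numpy as np

# Dummy year-on-year gap figures standing in for the real data.
gap_2014 = np.array([-0.1, 0.2, -0.5, 0.3, -0.2, 0.1, -0.4, 0.0])
gap_2015 = np.array([0.0, -0.3, -0.2, 0.4, -0.5, 0.2, 0.1, -0.1])

# Pearson correlation between the two years.
r = np.corrcoef(gap_2014, gap_2015)[0, 1]
print(round(r, 2))
```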
Most people, and in particular most education ministers, assume that there are “good” schools and “bad” schools, and that it’s not possible to go from “good” to “bad” or vice versa quickly without “super” leadership. Yet many of these “super” pupil premium reviewers can’t keep their own schools performing well from year to year. What is perhaps even worse, many were below the median for two years running – what credentials do these people have to advise on closing the gap?
We’re measuring the wrong things
The difference in achievement between the disadvantaged and the others isn’t the right indicator. The most influential predictor of KS4 outcome is prior attainment at KS2. If we measured the difference in achievement between the disadvantaged and the others after taking their prior attainment into account, we would have something worth checking at secondary level. If School A happens to take in 30 disadvantaged children with relatively high KS2 scores, it will probably have a small gap at KS4 because it already had a small gap at KS2. Similarly, School B, whose disadvantaged children arrive with low KS2 scores, will have a large gap at KS4. It isn’t the size of the gap but whether it widens or narrows from age 11 to age 16 that matters, and that is what a secondary school can influence. Our current performance indicators don’t recognise this at all. Of course there are some magnificent schools with big gaps at KS2 and small ones at KS4, but you can’t tell that from the current performance tables.
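One way to build such a prior-attainment-adjusted indicator is to regress KS4 outcomes on KS2 scores across all pupils and then compare the average residual for disadvantaged pupils against the rest. A minimal sketch, using made-up pupil-level scores with a built-in 3-point disadvantage deficit, rather than real data:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 500
disadvantaged = rng.random(n) < 0.3
# Disadvantaged pupils arrive with lower prior attainment on average...
ks2 = rng.normal(100, 15, size=n) - 8 * disadvantaged
# ...and KS4 is driven mainly by KS2, plus a small extra deficit of
# 3 points for disadvantaged pupils (the effect we want to isolate).
ks4 = 0.8 * ks2 + rng.normal(0, 5, size=n) - 3 * disadvantaged

# The raw gap mostly inherits the KS2 gap.
raw_gap = ks4[disadvantaged].mean() - ks4[~disadvantaged].mean()

# Adjust for prior attainment: fit KS4 ~ KS2 and compare residuals.
slope, intercept = np.polyfit(ks2, ks4, 1)
residual = ks4 - (slope * ks2 + intercept)
adjusted_gap = residual[disadvantaged].mean() - residual[~disadvantaged].mean()

print(round(raw_gap, 1))       # much bigger than 3 points: mostly inherited
print(round(adjusted_gap, 1))  # close to the built-in 3-point deficit
```

The raw gap conflates what pupils brought with them at age 11 with what the school did between 11 and 16; the residual gap isolates the latter, which is the part a secondary school can actually influence.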
So should you employ a pupil premium reviewer?
It’s your money. But if I were you, I’d check their gaps in the performance tables before handing over any cash. After all, it is the elixir of equality you’re after and not snake oil.
Why not take a look at the Swarm, which helps you do this easily?