8 May 2012
Multilevel statistical models developed by Professor Harvey Goldstein and Dr George Leckie show school league tables to be unreliable guides to school choice.
Dr George Leckie, Lecturer in Social Statistics at the Graduate School of Education, explains, ‘In the 1990s, league tables were based only on the percentage of children getting five A* to C grades at GCSE, but that is an unfair measure of school quality, as schools differ hugely in the ability of their student intakes, with some schools starting off with much higher-achieving pupils than others. You can’t use the raw exam results as a measure of school quality, because you’re not starting with a level playing field.’
So how do you produce fairer and more representative league tables? This is where the world of multilevel modelling enters the classroom. Multilevel models give statisticians tools to analyse individual behaviour while taking into account the different hierarchical contexts within which individuals operate. In the case of schools, students sit at the first level, schools at the second, and at the top are the local authorities within which schools operate. By recognising that students do not learn independently, and that their behaviour is influenced by the characteristics of their peer groups, teachers, schools and local authorities, you can go some way towards providing a more rounded picture of a school’s performance in a league table.
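To make the idea concrete, here is a minimal sketch (not the authors’ actual model) of the simplest two-level case: pupils nested in schools. It simulates attainment scores, then splits the total variance into a within-school (pupil) part and a between-school part, and reports the intraclass correlation, the share of variance attributable to schools. All the numbers are illustrative assumptions, not figures from the article.

```python
import random
import statistics

random.seed(42)

# Illustrative two-level data set: pupils (level 1) nested in schools (level 2).
# The sample sizes and spreads below are assumptions for the sketch.
n_schools = 100
pupils_per_school = 200
school_sd = 0.5    # between-school spread of mean attainment
pupil_sd = 1.0     # within-school pupil-level spread

scores = []  # one list of pupil scores per school
for _ in range(n_schools):
    school_effect = random.gauss(0, school_sd)
    scores.append([random.gauss(school_effect, pupil_sd)
                   for _ in range(pupils_per_school)])

# One-way ANOVA estimators of the two variance components.
school_means = [statistics.fmean(s) for s in scores]

# Pupil-level variance: average of the within-school sample variances.
within_var = statistics.fmean([statistics.variance(s) for s in scores])

# School-level variance: variance of school means, corrected for the
# sampling noise each mean carries (within_var / pupils_per_school).
between_var = max(
    statistics.variance(school_means) - within_var / pupils_per_school, 0.0)

# Intraclass correlation: share of total variance attributable to schools.
icc = between_var / (between_var + within_var)
print(f"within-school variance:  {within_var:.2f}")
print(f"between-school variance: {between_var:.2f}")
print(f"ICC (school share):      {icc:.2f}")
```

With these assumed spreads the school level accounts for roughly a fifth of the total variance; real analyses would fit the two components jointly in a mixed-effects model rather than with these simple moment estimators.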
In 2006, the government introduced a ‘contextual value-added’ school performance measure to their league tables, derived from a multilevel model. This measure takes account of the differing achievements of students entering the school, as well as adjusting for a range of ‘contextual’ factors such as eligibility for free school meals and lack of spoken English at home.
Sounds good so far. But the situation is still far from perfect, particularly when it comes to school choice. Leckie and Goldstein have shown that there is great statistical imprecision in the league tables, which means that you struggle to separate one school’s performance from another’s. ‘This is because the numbers of children in any particular calculation in a given year (say, 200 pupils per school) are too small to make precise comparisons’, says Leckie. ‘And if you’re a parent, you’re not interested in last year’s exam results, on which the tables are based, but in those six years ahead when your child will be taking exams. So what you need to know is how well last year’s results predict those in the future.’ Leckie and Goldstein’s models show that when you do those calculations, you can barely distinguish between schools.
So while multilevel models have given us improved measures of school effectiveness, maybe their ultimate lesson is that league tables can only ever be a part of the solution when it comes to selecting schools for our children. As Leckie concludes, ‘Statistical uncertainty is a fundamental aspect of communicating school performances, but is all too often ignored by the media and public.’
Leckie, G. and Goldstein, H. (2011) ‘Understanding uncertainty in school league tables’, Fiscal Studies, 32, 207–224.
Dr George Leckie is a Lecturer in Social Statistics at the Graduate School of Education. He researches various aspects of education and school effectiveness, including the quality of marking of England’s national curriculum key stage educational tests, and social and ethnic segregation among schools and neighbourhoods. He is perhaps best known for his work with Professor Harvey Goldstein highlighting the limitations of England’s school league tables.