Value added - a comment on the views of Woodhead
Chris Woodhead, the head of OFSTED, has welcomed the use of a 'value added' perspective in school performance (league) tables (Times Educational Supplement, 20/11/98). He indicates that it is not easy to 'arrive at statistically secure calculations (of value added)' and points out that proxy measures such as free school meals are inadequate. He also claims that the use of value added measures should supplement rather than replace 'raw' league tables such as we have at present. None of the views put forward by Woodhead are new, and in particular his justifications for retaining comparisons between schools based upon raw data have been well rehearsed. Nevertheless it would be of some interest if the head of OFSTED really had accepted the usefulness of value added scores as the principal means by which schools can be judged fairly in terms of the progress made by their pupils during their stay at those schools.
I will examine Woodhead's main arguments, and in so doing expose a number of confusions and misunderstandings of which he appears to be unaware, and which also happen to be present in some of the current debate about the use of value added measures.
The research on value added measures, which arises from work on school effectiveness, has shown not only that raw, unadjusted, measures are inadequate but also that there are inherent limitations in the use of either raw or value added measures when it comes to comparing schools. These limitations arise from 'statistical sampling error', a consequence of the relatively small numbers of pupils in each school being compared, and they mean that league table rankings, value added or raw, are unreliable and cannot be used to make comparisons among most schools. The very best that can be done is to use a ranking as a screening instrument to identify schools doing particularly badly or well in particular subject areas or for particular groups of pupils, so that they can be investigated further. This limitation is inherent and means that such rankings provide very limited information indeed for anyone wishing to use them as devices to choose among schools. Moreover, it is now apparent that the 'side effects' associated with league tables are so severe that there can be little justification for presenting these as being socially beneficial, as the government has done and as Woodhead advocates. The most successful use of such rankings is not in published tables but in 'school improvement' schemes such as those in Hampshire and Lancashire, where results are fed back to individual schools in sensitive ways which do not involve inappropriate public comparisons. It seems strange that Woodhead appears to be ignorant of this important work, since it has been much discussed both within and outside government circles.
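The force of the sampling-error argument can be seen in a small simulation (a purely illustrative sketch: the number of schools, the cohort size of 30 pupils and the mark spread are all assumed figures, not real data). Twenty schools of identical true effectiveness are each ranked on the mean score of one small cohort, and the exercise is repeated for a second cohort; the rankings shuffle substantially even though no school is genuinely better than any other.

```python
import random
import statistics

random.seed(1)

# Illustrative assumptions: 20 schools with IDENTICAL true effectiveness,
# each judged on a cohort of only 30 pupils, with pupil scores varying
# around a mean of 50 with a standard deviation of 15 marks.
N_SCHOOLS, COHORT, TRUE_MEAN, PUPIL_SD = 20, 30, 50.0, 15.0

def league_table():
    """Rank the schools on one cohort's mean score (rank 1 = 'best')."""
    means = [statistics.mean(random.gauss(TRUE_MEAN, PUPIL_SD)
                             for _ in range(COHORT))
             for _ in range(N_SCHOOLS)]
    order = sorted(range(N_SCHOOLS), key=lambda s: -means[s])
    return {school: rank + 1 for rank, school in enumerate(order)}

year1, year2 = league_table(), league_table()
moves = [abs(year1[s] - year2[s]) for s in range(N_SCHOOLS)]
top_school = min(year1, key=year1.get)

print(f"Mean rank change between the two cohorts: {statistics.mean(moves):.1f}")
print(f"School ranked 1st on cohort 1 is ranked {year2[top_school]} on cohort 2")
```

Since every simulated school is equally effective here, any movement in the table is pure sampling noise; with cohorts of this size the movement is large, which is why a single year's ranking cannot reliably separate most real schools either.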
Woodhead does point out that pupil mobility raises difficult issues for value added analysis. For example, at Key Stage 2 many schools will contain pupils only a minority of whom were there at Key Stage 1. We need to know which schools pupils have attended between key stages, and for how long, in order to assign progress measures properly. What he fails to understand, however, is that this is just as difficult a problem, for the same reasons, for raw league table comparisons. This problem has not been solved, and those who would use it as an argument against value added measures are thereby also arguing against the present performance league tables. Yet Woodhead quotes with approval the comparison of raw results 'in an absolutely straightforward way'. Unfortunately it is just not possible to do this, and the head of OFSTED should know better than to pretend that it is.
Woodhead's final point is that if we go down the 'value added' road this will lead some heads and teachers to have low expectations of their pupils and hence depress performance. His argument, however, is very confused. He equates a statistical prediction with a crude determinism. It is quite rational to admit that children who have low starting achievements will tend, on average, to have low final achievements. All the research shows this to be the case. It is quite another matter to use this as a rigid deterministic formula for supposing that any particular initially low-achieving child will therefore finish up as a low achiever. Children vary enormously and again the research shows that many initially low achievers will 'catch up' and vice versa. The whole point of teaching is to enable each child to progress as far as they can, given where they start from, and understanding the general relationship between their initial and final achievements is something that can assist this process. Yet Woodhead suggests that making comparisons between schools on the basis of their raw unadjusted results somehow avoids depressed expectations. On the contrary, it is precisely the inequity of such comparisons which has caused many of the problems with expectations.
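The distinction between a statistical prediction and a deterministic formula can likewise be illustrated with a small sketch (all the numbers below are illustrative assumptions, not research findings). Simulated final scores follow initial scores on average, yet individual children vary widely around that trend, so a substantial number of low starters finish above the overall median:

```python
import random
import statistics

random.seed(2)

# Illustrative assumptions: initial scores ~ mean 50, sd 10; final score
# tracks the initial score on average (slope 0.6) but with large
# individual variation (sd 10) around that trend.
initial = [random.gauss(50, 10) for _ in range(1000)]
final = [0.6 * x + 20 + random.gauss(0, 10) for x in initial]

# 'Low starters': children more than one sd below the initial mean.
low_start_final = [f for x, f in zip(initial, final) if x < 40]
median_final = statistics.median(final)
caught_up = sum(f > median_final for f in low_start_final)

print(f"Average final score, low starters: {statistics.mean(low_start_final):.1f}")
print(f"Average final score, all children: {statistics.mean(final):.1f}")
print(f"Low starters finishing above the overall median: "
      f"{caught_up} of {len(low_start_final)}")
```

The group average confirms the rational prediction that low starters tend to finish lower, while the count of those above the median shows why treating that prediction as a fixed destiny for any individual child is a fallacy.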
In short, Woodhead's half-hearted espousal of value added measures can only succeed in creating confusion rather than clarity. It is of some concern that the head of the organisation with a major responsibility for maintaining standards of teaching and learning in schools betrays such a lack of real understanding of the topic he writes about.