The national literacy and numeracy strategy (NLNS) evaluation: part two
The University of Toronto team evaluating the implementation of the NLNS has now produced a second report (Watching and Learning 2; Earl, L. et al., Ontario Institute for Studies in Education, University of Toronto, September 2001; published by DfES, London; ISBN 1 84185 5537). On publication of the first report in June 2000 I wrote a critique, to which the authors responded and to which I in turn wrote a rejoinder (see commentary on first report). At the time the schools' minister, Estelle Morris (now secretary of state for education), issued a press release claiming success for the government's strategy on the basis of the evaluation report's findings.
This time there has been no press release. One reason for this apparent lack of interest may lie in the somewhat more critical stance taken by the present report, even though, on the whole, it still reports favourably on the NLNS.
In her response last year Lorna Earl indicated that her team would ask teachers how the strategy was affecting them and would also look at any side effects; the team has now attempted to do this. While the new report does present a more sceptical view, it retains some of the problems of the previous one.
One of the issues raised about the first report was the team's implicit assumption that changes in test scores could be used to judge the success of the NLNS. In the second report this notion is reiterated: children 'at all levels have improved' (P. 19), without any recognition that such statements are highly problematic and with only passing reference (e.g. P. 34) to other interpretations of rising test scores. Thus, in the executive summary the authors state (P. xi) that 'the proportion of pupils achieving the expected levels on Key Stage 2 national assessments remains the most visible public indicator of the success of the strategies'. The term 'success' is used throughout the report without any clear definition of what it means or how it might objectively be recorded. As with the first report, there is no mention of the fact that the government effectively undermined two controlled evaluations of the strategies at the outset, evaluations which might have provided good evidence about their effectiveness. Nevertheless, the present report states categorically that the NLNS initiative 'is successful' (P. 89).
As in the first report, the Toronto team fails to address the debate about the misuse of data inherent in the publication of school league tables and the ways in which this can and does distort the curriculum. Indeed, they go further, actually praising the government: England 'is well positioned to offer a model of the use of data for wise educational decisions' (P. 87).
Some of the early concerns about the NLNS are now discussed. The side-effects issue is tackled, and the report expresses concern about the NLNS squeezing out other aspects of the curriculum (see e.g. P. 83). The team also reports concern about teaching to the test, although they rather optimistically seem to think that such concerns may disappear as the strategies become embedded and schools recognise their pedagogical advantages (P. 82).
The team's discussion of perceptions of the NLNS is based partly upon a survey of headteachers. They also carried out a survey of class teachers but do not report it because of a very low response rate (20%). Yet the headteachers' survey itself had a response rate of only 50%, so its findings need to be treated with some caution: perceptions could be either much worse or much better than those reported here. It is to be hoped that the new survey, to be carried out in 2002, will achieve a reasonable response rate.
This second report, gratifyingly, does address some of the shortcomings of the earlier one. The team has also been concerned to consult more widely with researchers in England, and two useful seminars were held in early January 2002 to discuss the reports. The final report of the project will, it is to be hoped, reflect reactions to and criticisms of the earlier reports. In particular I would hope that the team pays attention to the following:
- The need to contextualise the NLNS within the prevailing political climate and the plethora of initiatives emanating from central government.
- The problematising of certain key notions, such as interpretations of test-score trends over time and what precisely the tests are measuring. In particular, a recognition that literacy and numeracy are contestable concepts, and that insight into what is meant by them can be gained through an analysis of the tests that purport to measure them, especially those at Key Stage 2.
- A detailed discussion of the issues surrounding teaching to the test and of how high-stakes testing regimes can distort the very processes being measured (see the recent RAND study of the so-called 'Texas miracle' of rapidly improving test scores in response to a very high-stakes testing and reward scheme: Klein, S. P., Hamilton, L. S., McCaffrey, D. F. and Stecher, B. M. (2000), 'What do test scores in Texas tell us?', Education Policy Analysis Archives, 8, 1-21).
Finally, the team should go through their report carefully and make sure that they do not promote the idea that their study can in any way comment upon the success of the NLNS strategies in 'raising standards'.