A response to Hargreaves on 'evidence-based educational research'
David Hargreaves' TTA lecture (Hargreaves, 1996), the responses from Norris, Gray and Edwards, and especially Hargreaves' rejoinder in Research Intelligence, make interesting but really rather depressing reading. The original attempt to draw an analogy between educational and medical research seems to have been dropped, with David Hargreaves admitting that he did it 'merely to add force (and colour) to the argument', and his continued promotion of OFSTED as a potentially big player in the research field flies in the face of present evidence about OFSTED's competence to do so (Mortimore and Goldstein, 1996).
Before I get too critical, let me state some agreements with David.
Evidence-based educational practice is clearly desirable and I have no doubt that we need more of it. I certainly agree that there is a great deal of second- and third-rate research slopping around: I too referee articles and research proposals. I do wonder, however, whether the proportion is very much different from that in other fields, such as medicine, where I also have personal experience. A democratic forum where practitioners, policymakers and researchers could meet to debate seems desirable, but its responsibilities, powers and levels of support would need to be worked out in detail before I would wish to commit firm support. In particular, we do not need anything like a special Education Research Council of the kind advocated by Barber (1996), responsible for most educational research spending, isolated from the rest of social science research and driven largely by current Government priorities.
A major problem with this debate has been its terms of reference, as defined by the TTA lecture. For me there are two crucial issues which seem not to have been addressed. One is what counts as evidence; the other is how that evidence is used. Let me elaborate.
In discussing what we mean by evidence it is useful to distinguish between evidence which helps to explain whether and why things occur and evidence which rules out certain explanations or courses of action. The debate so far has been about the former: an example would be evidence about the learning effects of reducing class size and the factors which inhibit or promote learning in smaller or larger classes. Another example would be evaluations of particular schemes, such as Reading Recovery. Such evidence can be used, for example, to inform resource allocation and teaching. In both these cases some good, if limited, evidence is available.
The second kind of evidence is, in many respects, even more important. Researchers rightly spend time criticising the quality of existing research and practice, in education as much as in other fields. We need to know what doesn't work, what is logically inconsistent and what should actually count as well-established evidence. In my view we know quite a lot about these things. One only has to look at the work which demonstrates the fallacy of confusing performance indicators based upon raw exam results with the quality of education in schools. If such evidence were acted upon by policymakers, we would be spared many time-wasting and expensive activities, to the general benefit of teaching and learning.
David claims that we have too little to show for an expenditure of £50m a year on educational research in this country. That may be true, but it seems pointless to indulge in wrangling over the figure. The real issue is how much we would need in order to demonstrate some really big increments in knowledge about how and why things occur, and here I believe we will find that the above amount is actually far too small! For example, the major study of class size effects, the STAR project in Tennessee (Word et al., 1990), spent $11m on its own and arrived at some useful conclusions. Nevertheless, as with much research, the results also indicate further areas of study where even more money needs to be spent to answer interesting new questions. Given what is spent in other areas, education is rather poorly served, and I suggest that we might profit greatly from a debate, perhaps in David's research forum, about what would be a realistic expenditure on research for high quality knowledge production.
In short, the debate so far has been too narrowly focused. We need good evidence, but more immediately we need better ways of using all the evidence we currently have about what does and doesn't work. It would certainly be very encouraging if those who now make decisions, and those who would wish to make decisions, could set a good example. Sadly, present evidence provides scant support for such hopes, yet without that political commitment the research community will find the going very tough indeed.
References
- Barber, M. (1996). How to do the impossible: a guide for politicians with a passion for education. Inaugural professorial lecture. London: Institute of Education.
- Hargreaves, D. H. (1996). Teaching as a research-based profession: possibilities and prospects. London: Teacher Training Agency.
- Mortimore, P. and Goldstein, H. (1996). The teaching of reading in 45 Inner London primary schools: a critical examination of OFSTED research. London: Institute of Education.
- Word, E. R., Johnston, J., Bain, H. P., Fulton, B. D., et al. (1990). The state of Tennessee's student/teacher achievement ratio (STAR) project: Technical report 1985-90. Nashville: Tennessee State University.
by Harvey Goldstein