Hattie’s research: Is wrong Part 2

This posting is the second in a series about Hattie’s research being wrong.

But how to get at that research?

Hattie’s claims for it are so gigantic and, I believe, so wrong that the sheer scale of the wrongness provides a defence in itself of near impregnability. This posting argues that Hattie, right from the initial design, gets his meta-meta analysis research terribly wrong.

The variables within Hattie’s meta-meta analysis are so fantastically out of control, and so resistant to valid combination, as to make the extremely dodgy and biased nature of the 800 meta-analyses on which the meta-meta analysis is based of comparatively minor significance. The trick in popping Hattie’s bubble, I believe, is starting at the beginning and mainly staying there; any movement to what follows should only be to give backing to the analysis of what went wrong at the beginning.

[Explanation: when I refer to meta-meta analysis I refer to it in the singular, as Hattie is the only person in education to have used that research method (Nietzsche or Eddie the Eagle); meta-analyses in education are rather more common and are referred to in the plural. In education, both forms share the same faults; it is just that with Hattie’s meta-meta analysis they are multiplied exponentially. There are some education meta-analyses, those modelled on the medical approach, that are to be respected.]

The academic developers of meta-analyses claim objectivity on the basis of having applied a common measure of effects, but under examination the claim cannot be sustained. In contrast, medical meta-analyses, from which their use in education has been borrowed, have an accepted and respected place in research. But meta-analyses only work when, as is characteristic of medical meta-analyses, there are straightforward matters to sort out. Meta-analyses can work in education too, in matters like truancy or the games children play in free time, but that is not where meta-analyses have been taken in education. In education, meta-analyses have been taken into the highly complex, value-laden, and problematic matters that are characteristic of education, and been overwhelmed. But that is minor in comparison with the disaster that is Hattie’s meta-meta analysis.

Hattie makes no closely defined claim to objectivity, nor to having applied a common measure of effects; his claim, instead, is that no matter what the 800 meta-analyses throw up, he can synthesise everything, combine it all into a valid complex whole. The key word is synthesise. With that one word, synthesise, Hattie has managed to deceive himself, and a good number along with him, into accepting as of educational value something that is dangerous nonsense. Freed from the burden of objectivity in all that is done in the name of research, he is in stampede.

When challenged about the nature and number of variables, and the ways of measuring education influences in the meta-meta analysis, no worries, he says: through what amounts to mathematical formulae and other interpretations, he can combine all into a complex whole. Of course he can’t, and hasn’t. What he has wrought is one of the great disasters of the education world, particularly in having his education ideas picked up and declared gold – the real thing – by echelons of politicians and education bureaucrats.
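To make concrete what that combining actually amounts to, here is a minimal sketch of the arithmetic an effect-size synthesis typically rests on: a standardised mean difference (Cohen’s d) per study, then a plain average across studies. The code and the numbers are mine, invented for illustration, not Hattie’s procedure; notice that nothing in the calculation asks whether the studies measure remotely the same thing.

```python
# A minimal sketch of effect-size averaging (my illustration, not Hattie's code).
# Cohen's d is computed per study, then the d values are simply averaged;
# the arithmetic never checks that the studies are comparable.
from math import sqrt
from statistics import mean

def cohens_d(mean_treat, mean_control, sd_treat, sd_control, n_treat, n_control):
    """Standardised mean difference between a treatment and a control group."""
    pooled_sd = sqrt(((n_treat - 1) * sd_treat**2 + (n_control - 1) * sd_control**2)
                     / (n_treat + n_control - 2))
    return (mean_treat - mean_control) / pooled_sd

# Three hypothetical studies with different ages, settings, and outcome measures.
effects = [
    cohens_d(75, 70, 10, 10, 30, 30),    # immediate recall test, clinical setting
    cohens_d(52, 50, 8, 8, 200, 200),    # delayed comprehension test, ordinary classrooms
    cohens_d(60, 48, 15, 15, 12, 12),    # small special-education sample
]
print(round(mean(effects), 2))  # one tidy number, with all the differences hidden inside it
```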

I have found Hattie writing that getting education innovations into classrooms is too important to fuss around with research concerns here and there; the thing is to get on with it. And with his meta-meta analysis he has, to devastating effect.

[This posting is not, of course, an academic article, and even if I had the inclination or ability to write one, that isn’t what is needed here. If this posting is anything other than a posting then, at a considerable stretch, I would call it an essay. I know academics have a categorisation for such writing; as I have for a good bit of theirs – but enough.]

But why, if Hattie is so wrong, has he not had to face up to it?

That is a question to be answered by other quantitative researchers; all I would say is that Hattie’s rock-star quantitative status, and the constricted and measurable way he sets up education, provide other quantitative academics with a ready-made stellar status for their own career ambitions.

From a few recent academic articles Hattie has written, and some garbled internet responses, my interpretation is that he is aware of the faults in his meta-meta analysis and his presentation of results, but he is running for cover. There is a dodging here and there, an issuing of strategically positioned disclaimers, and articles on meta-analysis designed to throw critics off the scent.

Mr Hattie: there may be something in meta-analyses, but only if the variables are controlled and compatibility of procedures and outcomes established. In a recent contribution to a book (A Companion to Research in Education) you write: ‘We grow from how we combine seemingly disparate bits of information and make meaning out of the collective.’ So that is research, is it, Mr Hattie: producing ‘seemingly disparate bits of information’ that can then be safely left in the hands of academics to bring together? Well, Mr Hattie, you certainly achieved the former; a pity about the latter.

Hattie’s attitude towards the fantastic array of seemingly disparate variables, procedures, and outcomes seems to be: don’t worry about them until afterwards. He is apparently insouciant about it all: the definitions of the concepts involved (for instance, ‘whole language’); the age of the children, or whether they were children at all (in other words, adults); the socio-economic status, culture, or country of the children; whether the children were in a regular or a special class; whether it was a classroom or a clinical situation; the nature of the curriculum area; the part of the curriculum area tested; whether the application was marked immediately, or left for some time, even a long time, to test real understanding; whether there was a control group or not. And Uncle Tom Cobley and all.

Mr Hattie, we cannot tolerate having research inflicted on us that is defined as ‘seemingly disparate bits of information’ combined not by the control of variables and procedures within the research, but by the academic researchers following it. However, in fact, you were very much in at the beginning, weren’t you, cramming the meta-meta analysis ballot box with breathtaking bias? Right from the beginning you took control of the situation by deciding what you were willing to allow, the kind of ‘disparate bits of information’ you were willing to admit. With a few taps of your computer you ruled out any learning evidence that wasn’t visible. To serve what ends, Mr Hattie? Is what is left education, Mr Hattie? What kind of misbegotten conception is that? Then you ruled out any ‘disparate bits of information’ that were influenced by home background and socio-economic status. What kind of research is that, Mr Hattie? You are quite happy to accept massive amounts of ‘disparate bits of information’ derived from highly artificial situations. I can’t comprehend you, Mr Hattie. You dismissed as cliché Eysenck’s description of meta-analyses in education as garbage-in-garbage-out, but that is not a cliché, Mr Hattie; that is, for us, a living and recurrent horror.

Mr Hattie: the idea of meta-analysis research comes from the medical area, where it is applied to matters that are amenable to such research. Variables are scrupulously and rigorously controlled; you see, people’s lives are at stake. But Mr Hattie, you have gone even further: your analysis is of the meta-meta variety. The combination for collective meaning is in no way achieved in your research, and your presentation of education, defined to decimal-point accuracy and ranking absoluteness, is a stage show. Yes, I’m sure there are disclaimers, but the overall presentation is trenchant, and I point to your presentation to the New Zealand Treasury. If it were medical research, what you have done with your education research wouldn’t be contemplated, let alone tolerated, but it seems things can be much looser when it comes to the education of children.

May I point to one of your high effect size influences (0.73), one which has become central to professional development as a result of teachers acting on your research? (This influence is identified in Part 4, along with many more like it.) You certainly make ‘meaning out of the collective’ with your interpretation, but the meaning is rubbish. A good part of this influence’s high ranking comes from the inclusion of one meta-analysis that uses music as an education reinforcement, and another with a high proportion of studies involving students with severe learning and developmental delays. If these meta-analyses are excluded, the effect size falls back to something very ordinary. Is this the yellow-brick road for our research future?
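The arithmetic behind that fall-back is easily illustrated. The figures below are invented for the purpose; they are not the actual meta-analyses behind this influence, but they show how a couple of atypical meta-analyses can pull a simple average up to a headline number, and how the number sags once they are set aside.

```python
# Invented figures, for illustration only -- not the actual meta-analyses behind the influence.
from statistics import mean

ordinary = [0.35, 0.40, 0.30, 0.45]   # hypothetical run-of-the-mill meta-analyses
atypical = [1.50, 1.40]               # hypothetical outliers (music reinforcement, severe-delay samples)

print(round(mean(ordinary + atypical), 2))  # 0.73 -- the headline figure with the outliers in
print(round(mean(ordinary), 2))             # 0.38 -- something very ordinary without them
```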

You comment on how meta-meta analysis might be changed and make a suggestion, for instance, that some provision might be made for qualitative research (I interpret that as an acknowledgement that your meta-meta analysis could be better; how true). Oh please, Mr Hattie: you are deep in it, don’t try to haul yourself out and make things even messier. Your meta-meta analysis is a train wreck being presented as a smoothly running bullet train.

I am asking you to admit that your visible learning research is faulty beyond recovery, made worse by the way it has been interpreted and presented. You should apologise for the damage wrought on classrooms and children’s learning, withdraw your book from the market, and go bury yourself in research that follows the rules as commonly understood.

In New Zealand, five of that country’s top academics, Ivan Snook, John Clark, Richard Harker, Anne-Marie O’Neill, and John O’Neill, wrote an article for the New Zealand Journal of Educational Studies sharply criticising Hattie’s research, with a brief summary in the PPTA News. There was a fair amount of discussion following the journal article, but to no great effect, with the quantitative and qualitative camps heading off to their respective bunkers, as always happens, to mutter dark thoughts about the other.

The PPTA summary concentrates on Hattie’s findings ‘being appropriated by political and ideological interests and being used in ways which the data does not substantiate.’ There is no doubt about the writers’ concern and what they wanted to say but, in the summary, it is all expressed so diffidently. For those outside academia, as I am, it takes a while to understand that academics do not like to air their feuds in public: it can become consuming, get out of hand, appear undignified.

In the summary, Snook says he ‘did not believe the figures Hattie presented amounted to a “holy grail” for education. He was dubious about the benefits of reducing a complex area like a classroom to a decimal point.’ And he goes on to say ‘Hattie acknowledges the important role of socio-economic status and home background … but chooses to ignore it.’ Snook concludes by saying he ‘hoped the commentary would prompt John Hattie to discuss the issues it raised.’ Forlornly, in the light of what happens, he adds: ‘I would like to see a good debate on these issues.’

In conclusion, I want to pinpoint the interaction that I consider most powerfully supports my portrayal of Hattie’s research as educationally destructive. Hattie has converted a fair number of teachers to his conception of school education, but that interaction, in my view, is nowhere near the most important; the most important interaction by far is his interaction with politicians and education bureaucrats. Politicians and bureaucrats have found Hattie’s conception of education ready-made for what they understand education to be, and for serving their self-interests. In particular, they respond enthusiastically to Hattie’s unproblematic, cut-and-dried, decimal-point precision about what works and what doesn’t. They have gone on to use that conception of education as the philosophical basis for puffing themselves up against teachers, emboldening them to declare that they know; when teachers have cried out that they don’t, that education isn’t like that, the politicians and bureaucrats have oft seemed to reach deep into their souls to exclaim: yes it is, we know, we have been told; you are ill-informed, behind the times, and resistant to change, go away. And in that way they have justified imposing their education change on schools, paying little heed to change suggested by teachers, scapegoating them, and excluding teachers from proper representation in decision making.

In such a way has the democratic fabric of our little country been torn.


7 Responses to Hattie’s research: Is wrong Part 2

  1. kellyned says:

    Well said Kelvin.
    Hattie has done huge damage to education in two countries now and I am aware that his ‘research’ has made its way into the American educational system. Such simplistic (yet complex) attempts to break education down into a series of simple ‘paint by numbers’ stages are ridiculous.
    Unfortunately a superficial reading of it makes it seem plausible.

  2. Helen says:

    Hi Kelvin,
    I’ve only just caught up with these two postings on what is probably my biggest pet hate in education.
    You ask why teachers and principals haven’t challenged him. I suspect because it is so profoundly flawed that it is not even worth the time of day.
    Why do some people embrace it? I’ve heard that he is very charismatic in person and wonder if his support is mainly confined to the people who have met him. I spoke to a well-respected Auckland principal (female) who worshipped the guy and I was flabbergasted at how totally blinded she was to the nonsense of his ‘research’.
    I dug deeper into his notorious ‘class size doesn’t matter’ findings, based on research that cannot possibly be generalised to a primary school context; for example, one study was about the effects of secondary-level class size on future wages. Hardly relevant! Another, which researched mathematics and science class size at secondary school, is summed up by Hattie as having a dismal overall effect size of d = -0.04, but the findings of the study actually gave a very different message. It reported extensively on relationships between a number of factors, such as time spent on group and whole-class instruction, teacher experience, and use of innovative teaching strategies, yielding wide-ranging effect sizes, concluding that class size can be significant, and suggesting that where teachers are trained to adapt their pedagogical practices to make the most of smaller class numbers, there are benefits. Quite obvious really!
    As you indicate, there are numerous disclaimers in his book, but then he goes ahead to propound the nonsense anyway. He points out that by necessity the easily measurable data he uses limits the findings to surface and deep learning rather than conceptual understanding which he recognises is the desired outcome of good teaching, but then he does it anyway!
    On the one hand he claims that he never intended his findings to be used in the way the Ministry is using them; on the other hand he feeds it to them and milks the publicity. If I believed my research had been so blatantly misused, I’m pretty sure I would be taking legal action, or at the very least defending my work publicly in the most outspoken, extreme terms possible.
    That list in the back ranking teaching strategies is about the most destructive thing I have seen in education and is nothing short of criminal, but it is so easily misconstrued by unsuspecting readers as an authoritative, easily referenced summary of the book’s findings on what does and doesn’t work in the classroom. Indeed, when the book first came out promoted as the ‘holy grail’, that’s exactly what we started doing in our school – without even reading the book we used the list to start questioning and even modifying our practice.
    No self-respecting education researcher would ever create that list!
    Keep up the good work – this ‘research’ has to die.

  3. Kelvin says:

    Fantastic. You don’t have Shadbolt ancestry too, do you? (See Majority of One) What is to come is amazing but it won’t be to you. Sit back and enjoy the ride.

  4. Polly Hamilton says:

    Reblogged this on Miss Hamilton's Flipping Classroom and commented:
    Interesting (and reassuring?) to read some dissenting views about Hattie.

  5. Brian Cambourne says:

    Kelvin,
    Thanks for your critique of Hattie’s research. His application of it to reading research is particularly dubious because he assumes that all the studies he meta-analysed were based on the same operational definition of ‘effective reading’. He fails to mention that how researchers operationally define ‘effective reading behaviour’ determines which variables they control and/or ignore in their research. He conflates research which defines ‘effective reading’ as accurate word recognition, with research that defines it as the ability to accurately pronounce nonsense words, and with research that defines effective reading as reading silently and comprehending the author’s intended meaning.
    I remember his presentation at a leadership conference some years ago in Sydney where he arrogantly dismissed what all the miscue reading research showed and promoted his version of ‘effective reading pedagogy’ based on his ‘superior’ data.
    Brian Cambourne

  6. Kelvin says:

    Thanks for that insight Brian. Wonderful to hear from a researcher and curriculum guide of such eminence.
