Dear Vice-Chancellors

Dear Vice-Chancellors

The posting-letter you will receive, or have received, is a factual account of the Hattie saga (https://networkonnet.wordpress.com/2018/04/15/hatties-research-is-false-vice-chancellors-asked-to-investigate-part-1/), typical of my communications to teachers over a number of years, paying special attention to those who bore the erroneous burden of that research – children and teachers.

Below is a link, which I received from an Australian educationist in response to that posting, that sets out matters more in your terms.

https://visablelearning.blogspot.com.au/p/class-size.html

From a sidebar on that link, I have printed Professor John O’Neill’s morally courageous letter to the minister of education in the former government. (Professor O’Neill is Head of the Institute of Education at Massey University and was recently appointed to the Taskforce Panel to review Tomorrow’s Schools.)

I didn’t ask Professor O’Neill whether it was all right to reprint his letter. He wouldn’t have minded anyway, but nevertheless I went ahead on my own initiative.

For the general reader, I have also added to this posting the substance of the link, in more narrative form.

How did the education system come to let Hattie happen?

That it did speaks to teachers needing to have a real say in the education system to which they are the means, children the purpose, and we the servants.

As for Hattie, I am tolerating it no longer and will not let the matter rest.

John A. Lee – holder of the Military Cross, writer of Children of the Poor, and a famously rebellious Labour MP in the 1940s, as famous at the time as Michael Joseph Savage and Peter Fraser – told the story of how, when my grandfather Ernest Shadbolt was in the corridors of parliament, as he often was on some cause or other, Lee would find the nearest office in which to hide. (Apologies for the Shadbolt references in two successive communications; it must be my mood.)

I intend to follow the nuisance example of my irascible progenitor.

And my vow that the last posting was my last word on Hattie for some time seems to have wobbled a bit, in fact, collapsed – well, it was some time, but obviously not a long one.

Visiblehat, the Australian educationist referred to, contributed the following letter to the Comment section of Hattie’s research is false: Vice-Chancellors asked to investigate. 

It is included as a component part of my case to you.

I am confident it will be useful to you in considering the request to investigate the soundness and authenticity of Hattie’s research.

Regards

Kelvin Smythe

  1. visiblehat says:

April 15, 2018 at 12:45 pm

Thanks Kelvin, the Vice-Chancellors must read the 3 meta-analyses Hattie used for the 2009 version of Visible Learning. These show that Hattie can’t average, and that he grossly misrepresents the studies. I contacted Gene V Glass, the inventor of meta-analysis and author of the major study Hattie used for class size; he commented about Hattie’s averaging: ‘Averaging class size reduction effects over a range of reductions makes no sense to me’.

Summaries of the 3 meta-analyses are here: 

https://visablelearning.blogspot.com.au/p/class-size.html

VisibleLearning.

  2. Professor John O’Neill’s letter, as set out by visiblehat on a sidebar from the link

Professor O’Neill’s full letter can be found here. He sent a copy to Hattie, but I’m not aware of Hattie addressing any of the issues raised – see Hattie’s Defenses.

Professor O’Neill also extended these arguments in a 2012 publication, Material fallacies of education research evidence and public policy advice.

I will quote full sections of Professor O’Neill’s letter pertaining to the quality of Hattie’s synthesis.

Professor O’Neill calls for Hattie to remove inappropriate studies and re-rank influences (p. 4):

‘Professor Hattie’s research comprised a synthesis of more than 800 meta-analyses relating to achievement. These meta-analyses cover early childhood education, schooling and college (tertiary) level education. It is important to note, therefore, that some of the studies included in (i) the synthesis; (ii) calculations of the average effect size of the studies within a topic category; and (iii) the rank order of effect sizes, are not in fact studies of schooling.

This creates two policy problems. First, the synthesis contains studies that have no proven relevance to the schooling sector and schooling policy decisions; and second, the inclusion of these studies skews the stated average effect size for a particular topic and, as a consequence, its overall position in Professor Hattie’s rank order.

If as Minister of Education you wish to use the Visible Learning synthesis as evidence to inform policy decisions in the schooling sector then I would point out that, minimally, all the studies unrelated to schooling need to be removed and the remaining average effect sizes recalculated and re-ranked.’

Meta-analyses do not uncover the details of what happens in the classroom (p. 5):

‘The synthesis has no interest in uncovering interaction or mediating effects (e.g. what happens in school classrooms when class sizes are reduced and teachers and learners interact differently, or the curriculum is changed). This is problematic for educators at all levels not least because real classrooms are all about interactions among variables, and their effects. Professor Hattie implicitly acknowledges this shortcoming when he states that ‘a review of non-meta-analytic studies could lead to a richer and more nuanced statement of the evidence’ (p. 255).

He also explicitly acknowledges that when different teaching methods or strategies are used together their combined effects may be much greater than their comparatively small effect measured in isolation (p. 245).

Let me state the basic shortcoming more bluntly. The non-meta-analytic and qualitative or mixed methods studies Professor Hattie has excluded are precisely the research investigations that do make visible not only (a) that class size matters to student achievement, but also (b) what the observed effects of different class sizes are on classroom teaching and learning practices as a whole, and furthermore (c) which sub-groups of students are most materially affected by larger or smaller class sizes and the attendant changes in classroom processes they require.’

Professor O’Neill urges some quality control of the studies that Hattie uses (p. 7):

‘While Visible Learning has been described in popular media internationally as ‘teaching’s Holy Grail’, and has anecdotally proved very influential in New Zealand government circles, the method of the synthesis and, consequently, the rank ordering are highly problematic for the teachers and policy makers whose practical decisions it is intended to inform.

There is a need to scrutinise the references and begin to establish whether the sources used in the synthesis are:

(a) school-specific or should be discarded for the present purpose;

(b) quality assured or not – I discarded unpublished conference papers but retained doctoral theses;

(c) studies of general or specific populations of students such as those with learning disabilities, or of specific learning areas.’

Professor O’Neill’s analysis of some of the research used for particular influences (p. 8):

‘At the very least, the problems below should give you and your officials pause for thought rather than unquestioningly accepting Professor Hattie’s research at face value, as appears to have been the case.

(i) The ‘micro-teaching’ influence (average effect size 0.88, rank 4) must be discounted as the synthesis provides no evidence that it has had any effect on school students’ achievement, only on that of pre-service teachers;

(ii) the ‘professional development’ average effect size (0.62, rank 19) should be recalculated as one of the studies discussed provides no evidence of student effects; another cites the general effect size not the lower student achievement effect. Recalculation gives an average effect size of 0.49 and drops the ‘influence’ to 48 in the rank order. This is a considerable difference which both illustrates the overall fragility of the ranking, and suggests extreme caution in its use as a simplistic policy ‘takeaway menu’.

(iii) ‘providing formative evaluation [to teachers]’ (average effect size 0.9, rank 3) is based on two meta-analyses only, both involving students with special educational needs and therefore is not obviously generalisable to all schools, classrooms and teachers;

(iv) similarly ‘comprehensive interventions for learning disabled students’ (average effect size 0.77, rank 7) does not have demonstrated general applicability;

(v) the ‘feedback’ influence (average effect size 0.73, rank 10) is significantly increased by inclusion of one meta-analysis on the use of music as an education reinforcement (effect size 2.87). The meta-analysis contains a high proportion of studies with participants who have severe learning and/or developmental delays, in both school and out of school settings, and includes both adults and children. If this one source is excluded, the average drops to 0.63 (rank 19). (It should be noted that feedback is one of the few teaching influence domains where there is a sufficient number of studies to indicate more clearly which single aspects of feedback are likely to have the most general practical effect on student achievement (e.g. ‘immediacy of teacher feedback’) and which least (e.g. ‘teacher praise’));

(vi) the influence ‘spaced vs. massed practice’ (average effect size 0.71, rank 12) includes two meta-analyses specifically on the learning of motor skills with an average effect size of 0.96. If these are discarded on the grounds that they are not of general relevance to most learning areas of the curriculum, the influence of spaced practice drops to 0.46 (rank 53);

(vii) the general importance and ranking accorded to ‘meta-cognition strategies’ (average effect size 0.69, ranking 13) must also be questioned on the basis that the two meta-analyses both refer to reading interventions only;

(viii) the findings for ‘problem-solving teaching’ (average effect size 0.61, rank 20) are derived from six meta-analyses, three of which are unpublished doctoral studies and one an unpublished conference paper. The average effect size of the two peer-reviewed journal meta-analyses (one in mathematics, the other science) is 0.46 (this would give a reduced rank of 53);

(ix) the commentary (p. 201) on the influence ‘teaching strategies’ (average effect size 0.6, rank 23) lists numerous possible strategies for inclusion in teachers’ pedagogical repertoires but gives no policy or practice guidance on which should be used with which learners, in which subjects, under what conditions and in which sequence or combination, nor for how long or with what frequency. Equally, the author comments that ‘most of these meta-analyses relate to special education or students with learning difficulties’ (p. 200). Their general applicability for all school students has not been demonstrated;

(x) the ranking of ‘co-operative vs. individualistic learning’ (average effect size 0.59, rank 24) must also be recalculated because the studies include one of adults (effect size 0.68) and one unpublished conference paper (effect size 0.88). If these are excluded the average effect size falls to 0.4 (rank 64);

(xi) in contrast, for study skills (average effect size 0.59, rank 25), if the five college level meta-analyses are excluded, the average effect size of the remaining meta-analyses rises markedly to 0.74 (rank 9);

(xii) finally, for mastery learning (average effect size 0.58, rank 29) the meta-analysis with the largest effect size is an unpublished conference paper. If this is excluded, the average effect size is reduced slightly to 0.55 (rank 35) but even so this reduces its measured effect on student achievement to less than those of the home environment or socio-economic circumstances influences which Professor Hattie says at the outset cannot be influenced in schools.’
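
The arithmetic behind recalculations like these is simple enough to check for yourself. Below is a minimal sketch in Python (not O’Neill’s or Hattie’s own calculations; the only figures taken from the letter are the 0.73 and 0.63 averages and the 2.87 outlier in the feedback example). It backs out roughly how many meta-analyses must sit behind the ‘feedback’ average:

    # A minimal sketch of the arithmetic in O'Neill's feedback example:
    # a mean of 0.73 over n effect sizes falls to 0.63 when one outlier
    # of 2.87 is removed. Solving n*0.73 = (n-1)*0.63 + 2.87 for n:

    with_outlier = 0.73     # reported average effect size for 'feedback'
    without_outlier = 0.63  # reported average once the 2.87 source is excluded
    outlier = 2.87          # the music-as-reinforcement meta-analysis

    n = (outlier - without_outlier) / (with_outlier - without_outlier)
    print(round(n, 1))  # => 22.4

In other words, one source in roughly twenty-three is enough to lift the average by 0.10 and the rank from about 19th to 10th – exactly the fragility O’Neill describes.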

  3. The posting from the link by visiblehat, as adapted by me, with charts and diagrams omitted and made more of a narrative

Professor Peter Blatchford:

Several highly influential reports have set in motion a set of messages that have generated a life of their own, separate from the research evidence, and have led to a set of taken-for-granted assumptions about class size effects.

Given the important influence these reports (Hattie’s and others’) seem to be having in government and regional education policies, they need to be carefully scrutinised in order to be sure about the claims that are made (p. 93).

Hattie, later in the same book (Class Size: Eastern and Western Perspectives, discussed below), concedes:

The evidence is reasonably convincing – reducing class size does enhance student achievement (p. 113).

However, in Visible Learning and in his presentation with Pearson (2015) he seemed to have a different view (he called class size a disaster and a distraction) when he used the following three meta-analyses.

Does Hattie misrepresent the three studies?

  1. Gene Glass and Mary Lee Smith (1979) investigated a range of class-size comparisons, from classes of 40 versus 30 through to classes of 1 versus 40.

Hattie calculates an average by combining all class size reductions to get a low value of d = 0.09.

This is another Hattie error, as the average is 0.25.

But, given that the class size reductions are totally different, the question must be asked what does this average mean?

I contacted Prof Glass to ensure I had interpreted his study correctly; he kindly replied:

Averaging class size reduction effects over a range of reductions makes no sense to me.

It’s the curve that counts.

Reductions from 40 to 30 bring about negligible achievement effects. From 20 to 10 is a different story.

But teacher workload and its relationship to class size is what counts in my book.
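
Glass’s ‘curve’ point can be illustrated with a minimal sketch (the effect sizes below are hypothetical values shaped to his description, not Glass and Smith’s actual data): averaging across very different reductions collapses a strongly non-linear curve into a single number that describes none of them.

    # A minimal sketch of Glass's objection, with illustrative numbers only:
    # the achievement effect of a class-size reduction depends heavily on
    # which reduction is made, so one grand average is uninformative.

    reductions = {
        "40 -> 30": 0.02,  # negligible, per Glass
        "30 -> 20": 0.10,
        "20 -> 10": 0.35,  # "a different story", per Glass
        "10 -> 1":  0.60,
    }

    average = sum(reductions.values()) / len(reductions)
    print(round(average, 2))  # => 0.27, a figure that sits on no part of the curve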

Bergeron (2017) reiterates:

Hattie computes averages that do not make any sense.

If you look at this meta-analysis in more detail, a totally different picture emerges, which is not represented by this one average (Hattie only uses the one incorrect average).

A key finding from Glass and Smith’s graph (omitted here) is the difference between well and poorly controlled studies.

Mary Lee Smith and Gene Glass conclude (p. 15):

The curve for the well-controlled studies then, is probably the best representation of the class-size and achievement relationship.

A clear and strong relationship between class size and achievement has emerged. There is little doubt that, other things being equal, more is learned in smaller classes.

Hattie, in a recent interview with Hanne Knudsen (2017), ‘John Hattie: I’m a statistician, I’m not a theoretician’, said:

If, for example, a meta-analysis came out that showed, for example, that class size had a huge effect on learning, my model is wrong. I worry all the time about falsifiability (p. 7).

Yet it is ironic that the author of the class size study, Professor Gene Glass, who also invented the meta-analysis methodology, wrote a book contradicting Hattie, ‘50 Myths and Lies That Threaten America’s Public Schools: The Real Crisis in Education’.

Myth #17: Class size does not matter; reducing class sizes will not result in more learning.

Professor Glass says:

Fiscal conservatives contend, in the face of overwhelming evidence to the contrary, that students learn as well in large classes as in small … So for which students are large classes okay? Only the children of the poor?

Thibault (2017), in ‘Is John Hattie’s Visible Learning so visible?’, also questions Hattie’s method of using one average to represent a range of studies (translated into English):

We are entitled to wonder about the representativeness of such results: by trying to measure an overall effect for various subgroups with different characteristics, we produce an effect that does not faithfully represent any of the subgroups it encompasses!

By combining all the data, and with it the particular context associated with each study, we eliminate the specificities of each context, which for many are what give meaning to the study itself!

  2. McGiverin et al (1989) state that:

 The lack of experimental control and diverse definitions of large and small are among the reasons cited for inconsistent findings regarding class size (p. 49).

In addition, they are critical of the Glass (1979) study for not using pragmatic class sizes. As a result, their study focused on second-grade students in properly controlled studies using experimental and control groups (although not randomly assigned). They decided a more pragmatic definition of a large class size is about 26 and a small class size is about 19 (p. 49).

They introduce a caveat by quoting Berger (1981, p. 49):

Focusing on class size alone is like trying to determine the optimal amount of butter in a recipe without knowing the nature of the other ingredients.

Whilst they get a reasonably high d = 0.34, they advise caution in the interpretation of this result (p. 54). They also make special mention of the confounding variables – the Hawthorne effect, novelty, and the self-fulfilling prophecy.

  3. Goldstein et al (2000):

Once again, the detail of the study is lost when Hattie uses ONE averaged effect size to represent it.

Hattie’s Interpretation:

In his recent collaboration with Pearson (2015) he names class size as one of the major distractions. In previous presentations he consistently labelled class size a ‘disaster’ or as ‘going backwards’ (Hattie’s 2005 ACER presentation).

Yet, in another article in 2015, responding to critiques of his work, he concludes:

The main message remains, be cautious, interpret in light of the evidence, search for moderators, take care in developing stories.

Using polemic language like ‘distractions’ is not being very cautious!

Yet, in what I think is the most comprehensive peer review of class size so far, Class Size: Eastern and Western Perspectives (2016), Hattie retreats from the above polemic and concedes:

The evidence is reasonably convincing – reducing class size does enhance student achievement (p. 113).

Hattie then cleverly shifts the debate:

Why is the (positive) effect so small? (p. 105).

One of the answers to that question is pretty obvious when you look at the table (omitted here) from which Hattie derives his lowest effect size of 0.09. When you average very small effect sizes from class-size reductions of 40 down to 30 with large effect sizes from reductions of 20 down to 15, you get a low average.

Prof Adrian Simpson also insightfully explains, in ‘The misdirection of public policy: comparing and combining standardised effect sizes‘, that sampling from restricted populations is a major reason why effects of influences such as feedback, meta-cognition, and so on are high, while effects for whole-school influences – class size, summer school, and so on – are low (p. 463):

One cannot compare standardised mean differences between sets of studies which tend to use restricted ranges of participants with researcher designed, tightly focussed measures and sets of studies which tend to use a wide range of participants and use standardised tests as measures.
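
Simpson’s argument follows directly from the definition of the standardised effect size: d = (treatment mean − control mean) / standard deviation. A minimal sketch (the numbers are purely illustrative, not Simpson’s) shows the same raw gain looking ‘large’ in a restricted sample and ‘small’ in a full-range one:

    # A minimal sketch (illustrative numbers, not Simpson's) of why the same
    # raw gain yields very different standardised effect sizes: d divides the
    # gain by the spread of the sample, and restricted samples have less spread.

    def cohens_d(raw_gain, sd):
        return raw_gain / sd

    raw_gain = 5.0  # identical gain, in test points, in both kinds of study

    # Tightly focussed study (restricted range of participants, small SD):
    print(round(cohens_d(raw_gain, sd=8.0), 2))   # => 0.62, looks "large"

    # Whole-school study on a standardised test (full range, large SD):
    print(round(cohens_d(raw_gain, sd=15.0), 2))  # => 0.33, looks "small"

On this arithmetic alone, comparing the d for feedback with the d for class size says as much about the samples and the tests as about the interventions themselves.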

Hattie’s Interpretation Is Used by Politicians for Public Policy:

‘Hattie’s work has provided school leaders with data that appeal to their administrative pursuits’ (Eacott, 2017, p. 3).

The Australian Government in 2015 used Hattie to block significant funding – recommended by the Gonski Review – intended to redress the socio-economic imbalance in Australian schools.

Professor Blatchford comments about this:

When Christopher Pyne [the then Australian Education Minister] talked about prioritising teacher quality, rather than reducing class sizes, he set up a false and simplistic dichotomy (p. 16, AEU News).

From New Zealand comes a similar example, where Professor John O’Neill wrote a significant letter to the NZ Minister of Education on the problem of using Hattie’s research for class size policy.

Further, in ‘Material fallacies of education research evidence and public policy advice’, Professor O’Neill states:

The Minister of Education declined to rule out increases in class size. In short, this was because the ‘independent observation’ of Treasury and the research findings of an influential government adviser, Professor John Hattie, were that schooling policy should instead focus on improving the quality of teaching.

Writing about Hattie’s class size research, O’Neill warns that:

Much of the terminology is ambiguous and inconsistently used by politicians, officials and academic advisers. The propositions are not demonstrably true – indeed, there is evidence to suggest they are false in crucial respects. The conclusion is, at best, uncertain because it does not take into account confounding evidence that larger classes do adversely affect teaching, learning and student achievement (p. 2).

I am concerned about the unwavering confidence Hattie displays when he talks about class size, given the caution and reservation expressed by the scholars behind each of his 3 studies, as well as by other reputable scholars around the world – reservations due to the lack of quality studies, the inability to control variables, major differences in how achievement is measured, major confounding variables, and benchmark effect sizes.

The Largest Analysis and Peer Review of the Class Size Research (so far): Class Size: Eastern and Western Perspectives (2016), edited by Prof Blatchford et al. Note: Prof Blatchford has a website dedicated to class size research – http://www.classsizeresearch.org.uk

The editors state:

There are in fact relatively few high-quality dedicated studies of class size and this is odd and unfortunate given the public profile of the class size debate and the need for firm evidence based on purposefully designed research fit for purpose (p. 275).

What often gets overlooked in debates about class size is that CSR [class size reduction] is not in itself an educational initiative like other interventions with which it is often (and in a sense unfairly) compared, for example, reciprocal teaching, teaching metacognitive strategies, direct instruction and repeated reading programmes; it is just a reduction of the number of pupils in a classroom (p. 276).

Prof Blatchford warns again about correlation studies:

Essentially the problem is the familiar one of mistaking correlation for causality. We cannot conclude that a relationship between class size and academic performance means that one is causally related to the other (p. 94).

The editors conclude:

The chapters in this book are only a start and much more research is needed on ways in which class size is related to other classroom processes. This has implications for research methods: we need more systematic studies, for example, which use systematic classroom observations, but also high-quality multi-method studies, in order to capture these less easily measured factors.

There is some disagreement about which groups are involved, but studies often find it is low-attaining and disadvantaged students who benefit the most. Blatchford et al (2011) found evidence that smaller classes helped low-attaining students at secondary level in terms of classroom engagement.

Blatchford concludes: 

The aim is to move beyond the rather tired debates about whether class size affects pupil performance and instead move things on by developing an integrative framework for better understanding the relationships between class size and teaching, with important practical benefits for education world-wide (p. 102).

Hattie’s contribution to the book (Chapter 7):

Hattie appears to be an outlier in this book. Of the 17 scholars who have contributed to the book, ONLY Hattie myopically uses the effect size statistic to fully interpret the research. All the others use contextual and detailed features of the research to reach the conclusion that class size is important and significant.

At least the weight of scholarship has caused Hattie to retreat from his polemic on reducing class size as ‘a disaster’ and ‘going backwards’ and he finally concedes:

The evidence is reasonably convincing – reducing class size does enhance student achievement (p. 113).

But, Hattie cleverly reframes the issue to:

Why is the (positive) effect so small? (p. 105).

Given the significant amount of critique of Hattie’s methodology – the lack of quality studies; the use of disparate measures of student achievement, of university students or pre-school children, and of correlation; the inconsistent definition of small and large class sizes; indiscriminate averaging; benchmark effect sizes; and so on – I was disappointed that Hattie did not address any of these issues.

Hattie once again sidesteps the SIGNIFICANT issues raised by Zyngier (and many others): the control of variables and the differing definitions of large and small classes. Studies also differ on how to measure class size: some use a student/teacher ratio (STR), which includes many non-teaching staff such as the principal, welfare staff, library staff, and so on.

‘Past research has too often conflated STR with class size’ (p. 4).

Blatchford et al (2016) also comment on this problem with STRs:

They are not a valid measure of the number of pupils in a class at a given moment (p. 95).

Hattie just re-states that meta-analyses provide a reasonably robust estimate and myopically focuses on the effect size statistic, providing no defence on the validity issues. He concedes STR and class size are different, but he does not resolve the validity problem of using these disparate measures; he fobs off the argument with a red herring – that STR and class size are related (p. 112) – while providing no evidence for this claim.

Given the importance of class size research, STR and class size need to be MORE than just related.

They need to be the SAME!
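
The difference between the two measures is easy to see with toy numbers (a hypothetical school of my own invention, not data from any study):

    # A minimal sketch (hypothetical school) of why STR and class size are
    # not the same: STR divides pupils by ALL staff, including non-teaching
    # roles, while class size is the number of pupils actually in the room.

    pupils = 600
    teaching_staff = 24       # one teacher per class
    non_teaching_staff = 16   # principal, welfare staff, library, etc.

    str_ratio = pupils / (teaching_staff + non_teaching_staff)
    average_class_size = pupils / teaching_staff

    print(str_ratio)           # => 15.0 (what many studies report)
    print(average_class_size)  # => 25.0 (what pupils and teachers experience)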

Hattie adds a 4th study to his effect size average, Shin and Chung (2009) – effect size d = 0.20. But he conveniently does not inform the reader that this study re-analysed the same data (the Tennessee STAR study) as the previous meta-analyses he used.

Ironically, Shin and Chung warn against creating an effect size from repeated use of the same data:

If a study has multiple effect sizes, the same sample can be repeatedly used. Repeated use of the same sample is, however, a violation of the independent assumption (p. 14).

They also warn:

We found too many Tennessee STAR studies. We worry about the dependence issue (p. 15).
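
The dependence problem is mechanical. A minimal sketch (hypothetical labels and effect sizes, standing in for whichever syntheses share the STAR sample) shows a naive grand average quietly giving the STAR pupils double weight, and a crude correction that collapses entries sharing a sample before averaging:

    # A minimal sketch (hypothetical values) of the independence violation:
    # two entries in the synthesis re-analyse the same Tennessee STAR sample,
    # so a naive mean gives that one dataset double weight.

    metas = [
        ("meta_A", 0.25, "sample_A"),
        ("meta_B", 0.21, "STAR"),   # draws on the STAR data
        ("meta_C", 0.20, "STAR"),   # re-analyses the same STAR data
        ("meta_D", 0.34, "sample_D"),
    ]

    naive = sum(d for _, d, _ in metas) / len(metas)

    # Crude correction: average within each shared sample first.
    by_sample = {}
    for _, d, sample in metas:
        by_sample.setdefault(sample, []).append(d)
    collapsed = [sum(ds) / len(ds) for ds in by_sample.values()]
    corrected = sum(collapsed) / len(collapsed)

    print(round(naive, 3), round(corrected, 3))  # => 0.25 0.265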

It seems to me Hattie’s strategy is to take the focus off the scrutiny of his evidence and re-direct our attention elsewhere – a strategy for politicians, NOT for researchers!

Teacher Morale:

Blatchford et al (2016), comment on the associated issue of teacher morale and class size:

Virtually all class size studies report that teacher morale is higher in small classes than in larger classes. The personal preference for small classes was demonstrated by STAR third-grade teachers interviewed at the end of the school year. Teachers were asked whether they would prefer a small class with 15 students or a $2,500 salary increase. Seventy percent of all teachers and 81 percent of those who had taught small classes chose the small class option over a salary increase (p. 129).

Prof Gene Glass agrees:

Teacher workload and its relationship to class size is what counts in my book.

Other Commentary

The Australian Education Union has published a comprehensive analysis of the class size research. They summarise that reducing class size does seem to improve student outcomes. Also, they highlight the problems with Hattie’s methodology:

The critics have cited the methodological problem of synthesising a whole range of meta-studies each with their own series of primary studies. There is no quality control separating out the good research studies from the bad ones. The different assumptions, definitions, study conditions and methodologies used by these primary studies mean that Hattie’s meta-analysis of the meta-analyses is a homogenisation which may distort the evidence (comparing apples with oranges) (p. 13).

The 0.21 effect he claims for class size is an average so that some studies may have found a significantly higher effect than that. For example, ‘gold standard’ primary research studies (using randomised scientific methodology) such as the Tennessee STAR project recorded a range of effect sizes including some at 0.62, 0.64 and 0.66, clearly well above the ‘hinge-point’ and the same as most variables which Hattie regards as very important (p. 14).

From Professor John O’Neill’s AMAZING letter, where O’Neill quotes from a detailed case/naturalistic study by Blatchford (2011):

Professor Blatchford makes the point that class size effects are ‘multiple’. For children at the beginning of schooling, there are significant potential gains in reading and maths in smaller classes. Children from ethnic minorities and children who start behind their peers benefit most. There is also a positive effect on behaviour, engagement and achievement, particularly for low achievers, where classes are smaller in the lower secondary school (p. 10).

Leading researcher Professor Dylan Wiliam states that the evidence is pretty clear that if you teach smaller classes you get better results; the problem is that smaller classes cost a lot more (7 min into the full lecture).

Also, many scholars point out the irony in Hattie’s view that class size is a distraction: the number of students in a class limits the ability of teachers to implement the kinds of changes that Hattie shows have the biggest effect – formative evaluation, micro-teaching, behaviour, feedback, teacher-student relationships, and so on.

For example, Dr. David Zyngier in his meta-review:

The strongest hypothesis about why small classes work concerns students’ classroom behaviour. 

Evidence is mounting that students in small classes are more engaged in learning activities, and exhibit less disruptive behaviour (p. 17).

Each of these studies also discusses its limitations. In particular, Goldstein et al (2000) emphasise an issue that has emerged across the whole of Hattie’s synthesis:

We have the additional problem that different achievement tests were used in each study, and this will generally introduce further, unknown, variation (p. 403).

Goldstein et al (2003) go into detail about the problems of comparing correlation studies with random controlled experiments:

Correlational studies that … examined relationships between class size and children’s achievements at one point in time are difficult to interpret because of uncertainties over whether other factors (for example, non-random allocation of pupils to classes) might confound the results (p. 3).

Goldstein et al (1998) point out another major confounding variable:

There is a tendency for schools to allocate lower-achieving children to smaller classes. This bias means a considerable number of large cross-sectional (correlational) studies need to be ignored due to validity requirements (p. 256).

Robert Slavin, in ‘Best-Evidence Synthesis: An Alternative to Meta-Analytic and Traditional Reviews’ (1986), also discusses this issue:

A ‘best evidence synthesis’ of any education policy should encourage decision makers to favour results from studies with high internal and external validity – that is, randomised field trials involving large numbers of students, schools, and districts.

Dr. David Zyngier has published an excellent meta-review on class size:

Noticeably, of the papers included in this review, only three authors supported the notion that smaller class sizes did not produce better outcomes to justify the expenditure (p. 3).

The highly selective nature of the research supporting current policy advice to both state and federal ministers of education in Australia is based on flawed research. The class size debate should now be more about weighing up the cost-benefit of class size reductions, and how best to achieve the desired outcomes of improved academic achievement for all children, regardless of their background. Further analysis of the cost-benefit of targeted CSR is therefore essential (p. 16).

Recognised in the education research community as the most reliable and valid research on the impact of class size reductions at that time, the Tennessee STAR project was a large series of randomised studies, followed up in Wisconsin by the SAGE project. After four years, it was clear that smaller classes did produce substantial improvement in early learning and cognitive studies, and that the effect of small class size on the achievement of minority children was initially about double that observed for majority children (p. 7).

Zyngier concludes:

Findings suggest that smaller class sizes in the first four years of school can have an important and lasting impact on student achievement, especially for children from culturally, linguistically and economically disenfranchised communities (p. 1).

Professor Ivan Snook et al, in their peer review of Hattie, also comment in detail on class size. They also discuss the STAR study, reporting that effect sizes reached 0.66. They conclude:

The point of mentioning these studies is not to ‘prove’ that Hattie is ‘wrong’ but to indicate that drawing policy conclusions about the unimportance of class size would be premature and possibly very damaging to the education of children, particularly young children and lower-ability children. A much wider and more in-depth debate is needed (p. 10).

Dr. Neil Hooley, in his review of Hattie, ‘Making judgments about John Hattie’s effect size’, talks about the complexity of classrooms and the difficulty of controlling variables; on the issue of class size he says:

‘Under these circumstances, the measure of effect size is highly dubious’ (p. 44).

Dan Haesler has a detailed look at class size and other issues.

Kelvin Smythe gives insight into Hattie and class size: https://kelvinsmythenetworkonnet.wordpress.com/2016/05/03/the-class-size-issue-riposte-from-a-professor-wow/



4 Responses to Dear Vice-Chancellors

  1. 111peggyb says:

    Thank you Kelvin. For those of us writing and researching right now you have provided a wonderful resource. How can we have allowed this to happen in our country and why have we remained silent for so so long?
    You inspire me to work harder for the children of New Zealand.
    Arohanui e hoa [much love, friend].

  2. Roger Young says:

    I think we can test this class size debate with a fair test before we get into any research.
    I would suggest that we get a proven effective teacher (I am tempted to say anyone would do; perhaps someone like John Hattie would be a good starter) and ask them to lecture their ‘class size’ theories to a lecture hall of, say, 500 university students.
    Then, in the same lecture hall, ask the same teacher to teach 500 five-year-olds to read.
    Research could then use the results to see if class size makes a difference.

  3. Ted Lynch says:

    Thanks Kelvin, you can add some other peer reviews –
    Statistician Prof Bergeron calls Hattie’s results pseudoscience in his peer review – http://mje.mcgill.ca/article/view/9475/7229

    Mathematician Prof Adrian Simpson calls into question the comparing of effect sizes here -The misdirection of public policy: comparing and combining standardised effect sizes – https://www.researchgate.net/publication/312381954_The_misdirection_of_public_policy_comparing_and_combining_standardised_effect_sizes

  4. Susan Bearing says:

    You might be interested to know that Simpson has just recorded an entertaining podcast about effect size, explaining why Hattie (and others) are just getting it wrong. Listen at http://www.ollielovell.com/errr/adriansimpson/
