I recently began working toward my master’s degree, and have been struggling to make sense of some of the research studies that I have been reading.  Can you relate?

If, like me, you did not take a statistics course in your undergraduate program, and you have not had the privilege of taking a course on interpreting research yet, it is likely that you, too, are having difficulty with some of the vocabulary, as well as the mathematical system, used in the “Results” section of research articles.

Or perhaps you are a teacher who is interested in learning more from educational research?  Maybe you would like research to guide your instruction, but are unsure how to tell whether the research you read is correctly and appropriately recommending a given practice?

In any case, it is essential to have a basic understanding of the vocabulary and mathematical terms used in research studies.  Without this, it is hard to determine what the impact of a given approach might be.

In this post, I will share some of the essential ideas that are common to research articles.   I have simplified the definitions, in the interest of clarity, for use by those who are beginners in interpreting research data.

I am hopeful that this post may lead to some discussion and clarification of terms.  If you have questions yourself about these ideas, or if you can provide some illuminating examples, please do so, in the comment section below.

How can you tell whether the intervention or practice described in a research study has made a positive difference?  What do you need to know in order to look at the results critically?  According to Fisher, Frey and Hattie (2016), it is important to ask whether the story that the research article tells is convincing (p. 7).  To answer this question, we first need to be able to make sense of the data.

John Hattie, whose mission is to inform teachers of the practices that work best in education, has come to his conclusions through an incredibly large review of educational research (Fisher, Frey & Hattie, 2016).  In fact, his work is based on over 800 meta-analyses, conducted by researchers all over the world, and includes over 50 000 individual studies and over 250 million students!  His review of the research is considered to be “the most comprehensive review of literature ever conducted” (Fisher, Frey & Hattie, 2016, p. 4).

This brings us to our first term: Meta-Analysis.  A meta-analysis examines the findings from many, many studies on the same topic, to determine whether there are any patterns or trends across the studies.  Researchers use this information to change and “inform practice” (Fisher, Frey & Hattie, 2016, p. 5).  If a research article you find is a meta-analysis, this indicates that a broad swath of information has been summarized, and as such, its conclusions should be even more reliable than those of the individual research studies that were analyzed.

You likely have heard it said that research can prove anything.  Why does it seem so easy for people to claim that their practice is supported by research?  If everything works, why bother to look closely at the research that exists (Shanahan, 2019)?  It turns out that the answer lies in understanding our next research term:  Effect Size.

Effect Size refers to how we quantify the difference that was seen as a result of the intervention or practice.  For example, in studying a practice to see whether it improves reading abilities in students, researchers measure the students’ performance before the study begins, and then again after a number of lessons have been taught.  They then compare the scores from the beginning and the end of the study (Fisher, Frey & Hattie, 2016, p. 7).

Effect size can also compare two different groups of students, one which received the intervention and one that did not.  The abilities of the students at the end of the intervention are then compared, to see what the difference was, in terms of growth, between the two groups (Fisher, Frey & Hattie, 2016, p. 7).  Researchers may also draw conclusions as to any underlying or apparent reasons for these differences, beyond the specific intervention.

Effect size is shown as a decimal.  A very clear explanation of effect size can be found in the book Visible Learning for Literacy, by Fisher, Frey and Hattie (2016).  I have simplified the information from this book even further, to assist those who are quite new to reading educational research.

Effect size tells us whether the difference between the students in both groups, or the difference in growth across the study, was large or small.  It is shown numerically, as follows:

  • d = difference (the effect size)
  • d = 0.0: no change or growth
  • d = 0.2: a small improvement
  • d = 0.4: a medium improvement
  • d = 0.6: a large improvement  (Hattie, as cited in Fisher, Frey and Hattie, 2016, p. 7)
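To make that decimal concrete, here is a rough sketch of how an effect size of this kind (often called Cohen’s d) can be calculated: the difference between two group means, divided by a pooled standard deviation.  The reading scores below are invented purely for illustration, not taken from any study.

```python
import math
import statistics

def cohens_d(treatment, control):
    """Effect size: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    # The pooled standard deviation combines the spread of both groups.
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical end-of-study reading scores (invented data).
intervention_group = [12, 14, 15, 13, 16]
comparison_group = [10, 12, 11, 13, 9]

d = cohens_d(intervention_group, comparison_group)
print(round(d, 2))  # ≈ 1.9, well above Hattie's 0.4 hinge point
```

In words: the intervention group scored about 1.9 pooled standard deviations higher than the comparison group, which on the scale above would count as a very large improvement.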

Fisher, Frey and Hattie explain that if the number is above 0.0, the practice is said to have had a Positive Effect (p. 8).  When research studies tell us that there was a “Positive Effect”, what they are saying is only that the effect was above zero.  However, that does not mean that we should decide to invest money and time in the practice solely on that fact.  It is necessary to find out how large the effect was.  “It turns out that 95%+ of the influences that we use in schools have a positive effect; that is, the effect size of nearly everything we do is above zero…If you set the bar at showing any growth above zero, it is indeed hard to find programs and practices that don’t work” (Fisher, Frey and Hattie, 2016, p. 9).

Alright then, the next question to ask is, “What is an acceptable effect size?”

John Hattie sets the “bar of acceptability” at 0.4, and calls this the “hinge point” (Fisher, Frey and Hattie, 2016, pp. 8-9).  This is the number we need to look for when reading research.  A number of 0.4, or higher, is good.  A number of 1.0 is very, very good.  In fact, an effect size of 1.0 would indicate a very noticeable improvement, or an advancement that is “large, blatantly obvious and grossly perceptible” (Cohen, as cited in Fisher, Frey and Hattie, 2016, p. 7).

As I’ve said above, the definitions I am providing here are very simplified.  There is much more complexity involved that is outside the scope of this blog post, and at this point in time, beyond my own familiarity.  To give us an idea of that complexity, Fisher, Frey and Hattie (2016) point out that one must look closely at the particular situation when interpreting an effect size (p. 7).  If a practice has a lower effect size (0.2 or 0.3), there are times when it might still be a good idea to try it with students.  Factors to consider when deciding whether to try a practice with a low effect size include the financial cost of the practice and the difficulty involved in its use.  If the cost of a specific practice is low, and putting it into place with students would be especially easy and quick, it might make good sense to try it.

When reading about effect size in Visible Learning for Literacy (Fisher, Frey and Hattie, 2016), I was happy to have some clarification on a point that I brought up in an earlier post.  In “A Sobering Reality,” I had mentioned that a previous professor of mine, Dr. Freeze, had explained that when determining whether or not to continue an intervention, we must check to see how much growth has occurred over the course of a year (Busch, February 24th, 2020).  If the students involved have made less than a year’s progress over the course of a year, it is good practice to end the intervention and try something else (Rick Freeze, personal communication to author, February 23, 2020).

Why is this mention of “one year’s growth” so important, and where does it come from?  Fisher, Frey and Hattie (2016) explain that “students naturally mature and develop over the course of a year, and thus actions, activities, and interventions that teachers use should extend learning beyond what a student can achieve by simply attending school for a year” (italics in original, p. 8).  What we need to look for, when determining next steps, is whether the results show improvement above the effect of this natural growth.

After looking at the effect size, the next step is to determine whether the results were Statistically Significant.  Effect size and the size of the study (that is, how many participants there were) are combined to determine whether the practice would have a strong impact on student learning.  Fisher, Frey and Hattie (2016) even provide a formula to determine this:  “Significance = Effect size x Study size” (p. 7).  If you are even a little familiar with research studies, you have likely heard that the larger the study (the higher the number of participants), the more it can be trusted.  However, it is possible for a study to have a very large number of participants but a tiny effect size.  Therefore, these two factors must be considered together (p. 7).
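The interplay between effect size and study size can be sketched numerically.  The book’s formula is a shorthand; in practice, significance is usually assessed with a statistical test such as a two-sample t-test.  The rough sketch below is my own illustration (not from the book), using a normal approximation: it shows that the very same small effect (d = 0.2) is not statistically significant in a small study, but becomes significant in a large one, even though the effect itself remains small.

```python
import math

def approx_p_value(effect_size, n_per_group):
    """Two-sided p-value for a two-group comparison, using a normal
    approximation to the t-distribution (a reasonable simplification
    for larger samples)."""
    z = effect_size * math.sqrt(n_per_group / 2)
    # Standard normal cumulative probability via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The same small effect (d = 0.2) in a small versus a large study:
print(round(approx_p_value(0.2, 20), 3))   # ≈ 0.527 — not significant
print(round(approx_p_value(0.2, 500), 3))  # ≈ 0.002 — significant, yet still a small effect
```

This is exactly why a large study can report a “statistically significant” result that is nonetheless educationally unimpressive: the significance comes from the study size, not from the size of the effect.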

For more information on the finer points of the terms defined here, I recommend Chapter 1 of Visible Learning for Literacy:  Implementing the Practices That Work Best to Accelerate Student Learning (Fisher, Frey and Hattie, 2016).

I would like to invite you to please share any clarifications of the terms I have introduced in this post, in the comments section below.  I would especially like to invite you to contribute examples that explain the interplay of effect size and study size on statistical significance, if you happen to be well versed in educational research, yourself.

Also, I invite you to comment below if you find that I have misinterpreted or mis-communicated the concepts described above, keeping in mind that the goal here is to present as clear and simple an explanation as possible.  We all become stronger through collaboration and discussion, and I thank you in advance for your ideas!


References

Fisher, D., Frey, N., & Hattie, J. (2016).  Visible learning for literacy:  Implementing the practices that work best to accelerate student learning.  Corwin.

Shanahan, T.  (2019).  I’m a terrific reading teacher, why should I follow the research?  Reading Rockets.  Retrieved March 1, 2020, from https://www.readingrockets.org/blogs/shanahan-literacy/im-terrific-reading-teacher-why-should-i-follow-research