
Over the summer, I read the rather wonderful book, Rebel Ideas: The Power of Diverse Thinking by Matthew Syed. It’s not a book about education, but its contents are so universal it really should be on everyone’s reading list.

Rebel Ideas uses case studies to highlight how the ability to think differently – whether individually or as a group – can make (and has made) a difference in the world, and how a lack of diversity of thinking can have potentially catastrophic consequences.

One section of the book, entitled Beyond Average, I found particularly interesting, and it made me reflect on what we really know about the research we use to inform our teaching.

When I first started teaching primary, there was a much more laissez-faire approach to teaching. You pretty much just got on and did what you wanted. As long as you covered the curriculum over the year and your children made progress, you were left alone.

But over the last decade or so, things have changed dramatically.

Being ‘research-informed’ or ‘evidence-based’ is increasingly expected in UK schools and, on the surface, of course it makes sense.

Of course, we should be looking to evidence, not gut feelings or experience, to tell us what works in the classroom.

And over this time, an industry has grown around keeping schools informed about educational research.

The Education Endowment Foundation (EEF) regularly releases reports which ‘summarise the best available research evidence on a particular aspect of teaching and learning, and present actionable recommendations for practice’.

Ofsted releases Curriculum Research Reviews to share ‘what the evidence tells us about high-quality education’.

researchED holds conferences up and down the country – and internationally – to ‘bridge the gap between research and practice in education’.

Now, I’m not saying that these aren’t useful. I’ve read EEF and Ofsted reports and attended researchED events and they’ve all improved elements of my teaching.

But…

Rebel Ideas tells how Eran Segal, a scientist with a PhD from Stanford University in the USA, investigated dietary advice. One study he was particularly interested in focussed on whether it was healthier to eat commercially produced white bread or whole-grain sourdough bread.

In the study, participants ate one type of bread for a week, took a two-week break and then ate the other type for a week. Various elements of their health were tested, including blood-sugar levels, inflammation responses and nutrient absorption.

You’d think you could predict the outcome of the study, but you’d probably be wrong.

When the results from Segal’s bread experiment came rolling in, it turned out that the two different breads made no difference when it came to blood-sugar response or any of the other clinical markers. Industrialised white bread and handmade sourdough had virtually the same effect.

Rebel Ideas, Matthew Syed, p. 222 (emphasis added)

Segal was as surprised as I suspect you are.

So he dug deeper.

It turns out that while ‘on average’ there was no difference between the two types of bread, on an individual level, there was an enormous difference for some people.

When considered as a group, one person’s uniqueness was balanced out by someone else’s, leaving a result (and recommendation) that represented (and benefitted) almost none of the individuals.
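To make that point concrete, here is a tiny illustration with purely made-up numbers (they are hypothetical, not taken from Segal's study): if half the participants respond strongly one way and half respond strongly the other, the group average sits near zero and suggests 'no effect', even though every individual responded.

```python
# Hypothetical numbers only - an illustration of how averaging can hide
# strong individual responses; not data from Segal's bread study.

# Difference in blood-sugar response for six imaginary participants:
# positive = did better on sourdough, negative = did better on white bread.
individual_differences = [+3.0, -2.5, +2.0, -3.0, +1.5, -1.0]

average = sum(individual_differences) / len(individual_differences)

print(f"Group average: {average:.2f}")              # 0.00 - looks like 'no effect'
print(f"Individuals:   {individual_differences}")   # yet every individual responded
```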

It made me wonder about the research we rely on in education. The government, trusts, schools and teachers use this research to make policy decisions at the classroom level – such as how learning should be sequenced, how we can help children remember more of what they learn, how to spend pupil premium funding etc.

It made me wonder whether the recommendations of this research are targeted at the ‘average’ student, and whether most, if not the vast majority, of our students aren’t represented by that ‘average’.

Let’s look at one example which has always baffled me.

The use of coloured overlays for children with dyslexia and other reading difficulties.

There’s a lot of research around the use of coloured overlays, all of which concludes that they have no discernible benefit.

As a result, many teachers have stopped using them.

Nessy has helpfully collected links to the research.

However, my personal experience is that overlays can be incredibly useful for some students. I’ve given them out to students who found reading difficult and they’ve told me that the overlays really help. Other students found they didn’t help, so haven’t been given one.

Watch this short clip from Jay Blades’ BBC programme Learning to Read. 

The coloured overlay is clearly having an impact on his ability to see what’s written on the page.

So why does the research evidence say something different from people’s experience?

I feel we should be asking what the outcomes were for the individuals in these studies.

Like the bread, was it the ‘average’ outcome that overlays have no impact? Do individual experiences vary greatly?

If so, why are some schools refusing to try them with any students?

While there are disagreements around whether the overlays help with dyslexia, other reading difficulties or other visual impairments, to be honest I’d argue we shouldn’t be too interested in the nuance.

If they help a learner, we should be using them. If they don’t, we shouldn’t.

Look at this tweet…

Here are some of the replies from those who have read the research:

But here are the responses from individuals…

It’s a divisive issue.

My aim in sharing these tweets isn’t to shame anyone or to start (or continue) an argument for or against the use of coloured overlays.

I’ve used them simply to illustrate the discrepancy between the findings of research and the lived experience of some people.

Every research project has outliers. There are outliers in your school, likely in your class.

Just because something works (or doesn’t work) for a statistically significant number of people doesn’t mean that it works (or doesn’t work) for everybody.

We should be questioning what we’re being told by educational research.

After analysing the bread study, Segal said:

Standardised dietary advice will always be flawed, because it only takes into account the food, and not the person eating it.

I would like to suggest that standardised education advice will always be flawed, because it only takes into account the teaching, not the person learning it.

Sally Michaels

Head of Blue Squid Learning

Sally brings a wide range of experience to Blue Squid from her time as a teacher, SENCO and school leader.

She is passionate about finding a way to make education meaningful for all young people.
