Best Practice

What works best? A critical approach to evidence-based practice in schools

Evidence-based practice is widespread in our schools and classrooms, but it is not without its critics. Andrew Jones looks at the pros and cons of different approaches to educational research

Over the past 25 years, there has been a growing debate about the nature and quality of educational research and its relationship to practice and policy. This debate was sparked by a lecture given by the sociologist David Hargreaves for the Teacher Training Agency in 1996, in which he questioned whether teaching could be regarded as a research-based profession.

In the lecture, entitled Teaching as a research-based profession: Possibilities and prospects, Prof Hargreaves argued that the outcomes of educational research had up until that point been disappointing, calling it “a private, esoteric activity, seen as irrelevant by most practitioners”.

He added that it offered “poor value for money” and pointed out that few areas of educational research were both scientifically sound and useful to teachers.

His proposed solution lay in adopting a model of evidence-based practice (EBP) from the medical sciences. The basic idea behind this medical model of EBP is that if we do something (X) and it leads to a desirable outcome (Y), we judge that X works.

Prof Hargreaves’s work was supported by Robert Slavin, who stated that education research should primarily address questions about “what works” (Slavin, 2004).

Under this medical model, several sources of evidence can be used to support EBP decisions, including:

  • Laboratory experiments: These involve the manipulation of variables to establish cause-and-effect relationships in highly controlled conditions, which then form the basis of further research (see, for example, McDaniel et al, 2007). They can be used for EBP, but practical and ethical constraints mean that few involve school-age participants.
  • Randomised controlled trials: RCTs are often considered to be the “gold standard” of evidence (EEF, 2016). RCTs randomly assign participants to either an intervention group or a control group and the subsequent results are then used to determine whether the intervention is effective. These trials can happen “in the field” (see Styles & Torgerson, 2018, for an overview).
  • Systematic reviews: A way of collecting and summarising the results of multiple studies on a particular topic to identify which interventions are most effective. For example, John Hattie’s well-known Visible Learning work (2008). See also Sims et al (2021). A minimal sketch of an RCT contrast and a simple pooling of effect sizes follows below.

While these forms of evidence can be combined with other types, they tend to be based on quantitative methodologies following scientific principles – something philosophers call “positivism”. Positivism is a paradigm in which evidence must be scientifically verifiable or, failing that, rest on mathematical or logical proof. Paradigms are broad frameworks that define how we view the world.
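To make this logic concrete, the short sketch below (in Python) simulates a toy RCT, summarises it with a standardised effect size, and then pools effect sizes from several such toy trials in the crude spirit of a systematic review. It is an illustration only, not a real analysis: the function name (run_toy_rct), the score distributions, the sample sizes and the assumed “true” benefit are all invented for the example.

    import math
    import random
    import statistics

    random.seed(1)

    def run_toy_rct(n_per_group, true_effect):
        """Randomly assign simulated pupils to control or intervention; return Cohen's d."""
        # Hypothetical test scores: control centred on 50, intervention shifted by true_effect.
        control = [random.gauss(50, 10) for _ in range(n_per_group)]
        treated = [random.gauss(50 + true_effect, 10) for _ in range(n_per_group)]
        s1, s2 = statistics.stdev(control), statistics.stdev(treated)
        pooled_sd = math.sqrt(((n_per_group - 1) * s1 ** 2 + (n_per_group - 1) * s2 ** 2)
                              / (2 * n_per_group - 2))
        return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

    # One toy trial: the "what works" judgement rests on a single intervention-versus-control contrast.
    print(f"Single toy trial effect size: {run_toy_rct(60, 3):.2f}")

    # A toy "systematic review": pool effect sizes from several trials, weighting each
    # by its sample size (a crude fixed-effect average, for illustration only).
    trials = [(run_toy_rct(n, 3), n) for n in (30, 60, 120, 200)]
    pooled = sum(d * n for d, n in trials) / sum(n for _, n in trials)
    print(f"Pooled effect size across {len(trials)} toy trials: {pooled:.2f}")

In a real trial there would, of course, be pre-registered outcome measures, significance testing and attention to attrition; the sketch simply shows why such designs are prized for isolating the effect of a single intervention.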

 

The impact of EBP

Prof Hargreaves's critique of educational research has been influential. In recent years, advocates of this model of EBP have argued that the growth of research on educational interventions has made it easier for teachers to access information about what works (Coe & Kime, 2019).

Organisations such as the Education Endowment Foundation (EEF), the Chartered College of Teaching, and ResearchEd have promoted EBP and disseminated findings from researchers whose methods are largely in line with the medical model of EBP. This has led to significant advances in applying insights from cognitive psychology to education (Weinstein et al, 2018).

The terminology has since been tweaked: EBP is now often referred to as “evidence-informed practice”, acknowledging that the effectiveness of teaching strategies can vary depending on context (Neelen & Kirschner, 2020).

Both approaches, nonetheless, rest on the same evidence base and methodological principles. As stated by Perry et al (2021): “The dominant science for informing education practice has been cognitive psychology.”

 

Problems with EBP

However, back in 1996, Prof Hargreaves's critique of educational research was not universally accepted. Some educational researchers argued that his criticisms were unfair and that there was a significant body of research that was valuable to teachers and policy-makers at the time (see Hammersley, 2009).

Furthermore, considering the medical model of EBP that has become so dominant in current discourse, Wrigley (2016) argues that we need to move beyond the simple view that natural science, experiments and RCTs involve a straightforward process of isolating an independent and a dependent variable while holding all others constant.

For instance, the limitations of research evidence produced under this model of EBP, with its reliance on insights from cognitive psychology, include:

  • Small sample sizes: Many studies have small sample sizes, which makes it difficult to generalise the findings to larger groups of learners (see the short illustration after this list).
  • Different demographics: The demographics of the intervention and control groups often differ, which can affect the transferability of the findings.
  • Complex control group comparisons: Many studies use complex control group comparisons, which can make it difficult to determine which factors are causing the changes in learning.
  • Measurement issues: It can be difficult to accurately measure the amount of work set, completion of work and the duration of learning activities.
  • Test scores: The impact of cognitive psychology interventions is occasionally assessed using teacher tests, standardised tests, and grade averages. However, these test scores can be unreliable, especially if they are not externally assessed.
  • Pupil and teacher bias: Pupils and teachers may distort their answers in research studies, which can affect the findings.
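On the first of these points, the short sketch below is again a purely illustrative Python simulation: it re-runs a toy trial many times with the same fixed “true” benefit and shows how widely the estimated gain swings when groups are small. The function name (estimated_gain) and all figures are invented for the example.

    import random
    import statistics

    random.seed(7)

    def estimated_gain(n_per_group, true_gain=3.0):
        """Mean score gain (intervention minus control) observed in one simulated toy trial."""
        control = [random.gauss(50, 10) for _ in range(n_per_group)]
        treated = [random.gauss(50 + true_gain, 10) for _ in range(n_per_group)]
        return statistics.mean(treated) - statistics.mean(control)

    # Repeat the toy trial 500 times at each sample size and report the spread of estimates.
    for n in (20, 80, 320):
        gains = [estimated_gain(n) for _ in range(500)]
        print(f"n = {n:>3} per group: estimated gain ranges from "
              f"{min(gains):.1f} to {max(gains):.1f} (true gain 3.0)")

The smaller the groups, the more a single study can overstate (or entirely miss) an intervention's benefit, which is one reason findings from small studies transfer poorly to whole cohorts.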

Consequently, it is important to consider the limitations of cognitive psychology (and, increasingly, cognitive neuroscience) when making decisions about teaching practices, and to treat any firm claim of EBP with caution.

 

What do we mean by evidence anyway?

The ambiguity surrounding what qualifies as evidence also poses a significant challenge to EBP. The view of EBP currently in vogue is often (but not always) centred on positivist and quantitative notions of evidence-gathering and analysis and, in many cases, bypasses other methodologies.

For instance, while it might be enticing to base all our educational interventions on a positivist conceptualisation of EBP, strict adherence to it would potentially rule out – or at the very least subordinate – other forms of evidence, such as non-numerical (descriptive and interpretive) data that can be used to understand an individual or group’s social reality, including their attitudes, beliefs, and motivations. It would also downplay personal experience.

Other methods, each with their own weaknesses, can be used to evaluate educational activity, including:

  • High-quality observational studies: These can provide insights into the processes of teaching and learning. However, it is important to be aware of the potential biases of the observers.
  • Case studies: These are in-depth, detailed examinations of a particular issue within a real-world context and can draw on quantitative or qualitative research methods. Of course, representativeness and replication are issues.
  • Ethnographic studies: These can provide a detailed understanding of the culture and context of a particular educational setting. However, these studies can be time-consuming and expensive to conduct.
  • Conversation and discourse analytic studies: These can be used to analyse the language used in educational settings. They provide insights into the ways in which power and knowledge are distributed and can be combined with structured, semi-structured, and unstructured interviews as well as focus groups.

Additionally, Gert Biesta (2007) argues that while EBP focuses on questions of effectiveness and efficiency, it does not consider the broader moral and political dimensions of education. He believes that we need to expand our views about the inter-relations among research, policy and practice to keep in view education as a thoroughly moral and political practice that requires continuous democratic contestation and deliberation.

 

Final thoughts

On a personal level, I feel that some of the most popular education writers, bloggers and social media users as well as professional development organisations and training providers (often delivering Department for Education initiatives) have jumped on the EBP bandwagon without fully comprehending the issues and limitations discussed above. While EBP is essential to our practice, I argue that its current anchoring in evidence from cognitive psychology, which lends itself to the so-called medical model, is too narrow and limiting.

To be effective, we need a more inclusive and holistic view of EBP, incorporating both quantitative and qualitative evidence. It is ironic, as any A level sociology student will tell you, that Prof Hargreaves's most significant sociological work employed qualitative methods, particularly observation, and not the positivism his conception of EBP demanded (see, for example, Hargreaves et al, 1975).

 

Further information & resources

  • Biesta: Why “what works” won’t work: Evidence‐based practice and the democratic deficit in educational research, Educational Theory (57,1), 2007.
  • Coe & Kime: A (new) manifesto for evidence-based education: Twenty years on, Evidence-Based Education, 2019: https://evidencebased.education/new-manifesto-evidence-based-education/
  • EEF: Blog: Do EEF trials meet the new ‘gold standard’? 2016: https://tinyurl.com/jj6fbynf
  • Hammersley: What is evidence for evidence-based practice? In Evidence-Based Practice: Modernising the knowledge base of social work, Otto et al (eds), Budrich, 2009.
  • Hargreaves, Hester & Mellor: Deviance in Classrooms, Routledge, 1975.
  • Hattie: Visible Learning: A synthesis of over 800 meta-analyses relating to achievement, Routledge, 2008.
  • McDaniel, Roediger & McDermott: Generalizing test-enhanced learning from the laboratory to the classroom, Psychonomic Bulletin & Review (14), 2007.
  • Neelen & Kirschner: Evidence-Informed, Learning Design: Creating training to improve performance, Kogan Page, 2020.
  • Perry et al: Cognitive science in the classroom, Education Endowment Foundation, 2021: https://tinyurl.com/bddyav6x
  • Sims et al: What are the characteristics of effective teacher professional development that increase pupil achievement? A systematic review and meta-analysis, Education Endowment Foundation, 2021.
  • Slavin: Education research can and must address “what works” questions, Educational Researcher (33), 2004: https://tinyurl.com/44hnant8
  • Styles & Torgerson: Randomised controlled trials (RCTs) in education research: Methodological debates, questions, challenges, Educational Research (60,3), 2018: https://tinyurl.com/2s4e42de
  • Weinstein, Madan & Sumeracki: Teaching the science of learning, Cognitive Research (3,2), 2018: https://tinyurl.com/yeypds8d
  • Wrigley: Not so simple: The problem with 'evidence-based practice' and the EEF toolkit, Forum, (58,2), 2016: https://tinyurl.com/z99bakmr