
How to Read Educational Data When Choosing an Intervention Programme


Before committing to any new school initiative, it’s important to demonstrate due diligence and research your prospective purchase. This is especially important for school leaders given the current financial constraints on UK schools. School leaders will endeavour to evaluate the efficacy of any new external agency, but how should they go about it?

How do you approach the process of due diligence when assessing potential programmes, training providers or other external agencies? Scrutinising data can be a daunting task, and when presented with a myriad of figures and statistical information, it can be hard to see the wood for the trees.

It can be difficult to compare different programmes directly, since most present their claims in different forms, so it can be a challenge to make an informed decision on the best option for your school, students and staff.

A new intervention or school initiative is an investment: an investment in monetary terms, but also an investment of your time and energy in seeing it integrated successfully in school, as well as the time and energy of the nominated staff who will ultimately be responsible for running the programme. Schools therefore need to be confident that they’re making the right investment, and reassured by the evidence made available that the decision they’ve made is the right one.

Deciding on the right intervention is an important decision, and often a pricey one, so listed below are a few things to explore when considering providers.

Data

Empirical data should never be hidden; evidence should be front and centre. The measure of impact should feature in a company’s key messaging. Any provider worth its salt should clearly state the progress demonstrated by students enrolled on their programme, so be wary of ambiguous claims, vague references to benefits and general comments pertaining to improvement.

Testimonials from customers are great and can add another layer of confidence when weighing up a provider, but they should never be a substitute for empirical data. Impact evidence should be something a company wants to shout about, not something you have to hunt down in the footer of a webpage.

How does it impact students? By how much?

In what timeframe?

School leaders should be presented with data which clearly illustrates the impact of the intervention. Ideally, there should also be longitudinal data which demonstrates sustainability. Providers should be able to demonstrate that any impact observed immediately following the programme is sustained over a long period of time.

This data needs to be independently evaluated by a reputable body, and based on a sufficient sample size to ensure the validity of any statistical analysis. Again, be wary of any self-led analysis.

Evidence of impact

Developmentally, a student who does not require intervention should make month-for-month progress. So, once we have identified a student who requires support, we must intervene with targeted intervention to bring them level with their peers as quickly as possible. Intervention must be specialised to the specific need and act as a fire break: it should be a short-term solution.

Successful interventions ought to result in the student being withdrawn from the classroom for the least amount of time possible. If the support implemented for students is ongoing with no set end date, it is not intervention; it is simply a programme of study.

Ratio gains are an important way to evaluate the effectiveness of an intervention and something all school leaders need to investigate.

Imagine you’re comparing two external intervention providers: Programme A boasts average reading-age gains of 2.5 years, and Programme B a more modest 14 months.

Which one would you choose?

Programme A, right?

A gain of 2.5 years sounds like the right decision, and it would be IF the durations of the two programmes were comparable. A gain of 2.5 years in, say, 4 months compared with a gain of 14 months in the same 4 months would make Programme A the better choice. However, if Programme A’s 2.5-year gain in fact followed 24 months of intervention, and Programme B’s 14-month gain followed just 2 months of intervention, then we might make a different choice.

To better compare the two interventions, we should consider the ratio gain of each: the number of months of reading-age gain divided by the number of months of intervention received. The result can then be placed on a range of effectiveness:

1-2 can be considered modest

2-3 can be considered useful

3-4 can be considered substantial

4+ can be considered remarkable

So, Programme A: 30 months ÷ 24 months = 1.25

Programme B: 14 months ÷ 2 months = 7

Programme A could be considered to have modest impact, whereas Programme B has remarkable impact. After considering ratio gains, Programme B would be the better choice.
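
Because the calculation is so simple, it is easy to check a provider’s claims yourself. Below is a minimal Python sketch of the ratio-gain arithmetic and the effectiveness bands above; the function names are my own, and the figures are the Programme A and Programme B examples from this article.

```python
# A minimal sketch of the ratio-gain calculation and effectiveness bands.

def ratio_gain(months_gained, months_of_intervention):
    """Reading-age gain in months divided by months of intervention received."""
    return months_gained / months_of_intervention

def effectiveness(rg):
    """Map a ratio gain onto the bands listed above."""
    if rg >= 4:
        return "remarkable"
    if rg >= 3:
        return "substantial"
    if rg >= 2:
        return "useful"
    if rg >= 1:
        return "modest"
    return "below modest"

# Programme A: 2.5 years (30 months) gained over 24 months of intervention.
# Programme B: 14 months gained over 2 months of intervention.
for name, gain, duration in [("Programme A", 30, 24), ("Programme B", 14, 2)]:
    rg = ratio_gain(gain, duration)
    print(f"{name}: ratio gain {rg:.2f} ({effectiveness(rg)})")

# Programme A: ratio gain 1.25 (modest)
# Programme B: ratio gain 7.00 (remarkable)
```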

Where we see measures of impact in promotional material or data evaluations, we often see effect size quoted too. Effect size is important to factor into your deliberation alongside ratio gains, as it tells you how meaningful the relationship is between variables, or the difference between groups. It indicates the practical significance of a research outcome: a large effect size means the findings have practical significance, while a small effect size indicates limited practical applications.

Two widely used measures are Cohen’s d and Pearson’s r. Effect sizes can be categorised as small, medium or large according to Cohen’s criteria, and the thresholds differ depending on which measure is used: conventionally, a Cohen’s d of around 0.2 is small, 0.5 medium and 0.8 large, while a Pearson’s r of around 0.1 is small, 0.3 medium and 0.5 large.

Cohen’s d can, in absolute terms, take any value from 0 upwards, while Pearson’s r ranges between -1 and 1.

In general, the greater the Cohen’s d, the larger the effect size. For Pearson’s r, the closer the value is to 0, the smaller the effect size; a value closer to -1 or 1 indicates a larger effect.

Pearson’s r also tells you something about the direction of the relationship:

  • A positive value (e.g., 0.7) means both variables either increase or decrease together.
  • A negative value (e.g., -0.7) means one variable increases as the other one decreases (or vice versa).
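
To make the two measures concrete, here is a minimal Python sketch of both calculations. Every score and variable below is invented purely to illustrate the arithmetic, and the sketch assumes Python 3.10 or later for statistics.correlation.

```python
# A minimal sketch of Cohen's d and Pearson's r using made-up data.
from math import sqrt
from statistics import mean, stdev, correlation  # correlation: Python 3.10+

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = sqrt(((n_a - 1) * stdev(group_a) ** 2 +
                      (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical post-test standardised scores for an intervention group
# and a matched comparison group.
intervention = [92, 98, 101, 105, 99, 103, 96, 100]
comparison = [88, 94, 90, 97, 91, 95, 89, 93]
print(f"Cohen's d: {cohens_d(intervention, comparison):.2f}")
# Judge the result against Cohen's benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large.

# Pearson's r between, say, minutes of tuition received and reading-age gain.
minutes = [200, 250, 300, 350, 400, 450, 500, 550]
gain_months = [3, 4, 4, 6, 7, 7, 9, 10]
print(f"Pearson's r: {correlation(minutes, gain_months):.2f}")
# A positive value close to 1 means more tuition goes with larger gains.
```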

Financial viability

When looking at the delivery model of an intervention, consider staffing costs. What is the student to teacher ratio? Project the number of students that a member of staff can work with over an academic year to calculate the equivalent price per head. An intervention based on a 1:1 teaching ratio could be the costliest intervention when you factor in the required duration of the programme.

Providers should also clearly state how much tuition is required for the intervention to be most effective. This is something else that must be scrutinised, as it has a direct effect on the true cost of a programme. If it’s suggested that students need 20-30 minutes a day, this could be easy to accommodate around curriculum entitlement, but for how long is it required? This could be a significant commitment over the entire course of the programme.
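
To make this concrete, here is a hedged sketch of how you might project a per-student cost from a provider’s quote. The cost model (a one-off licence plus staff delivery time) is my own assumption, and every figure below is hypothetical; substitute the provider’s actual pricing and your own timetable.

```python
# A hedged sketch of projecting the true per-student cost of an intervention.
# All figures are hypothetical placeholders.

def cost_per_student(licence_cost, staff_hourly_cost, minutes_per_session,
                     sessions_per_week, programme_weeks, students_per_group):
    """Licence plus staff delivery time, spread across the students served."""
    staff_hours = (minutes_per_session / 60) * sessions_per_week * programme_weeks
    total = licence_cost + staff_hours * staff_hourly_cost
    return total / students_per_group

# A 1:1 programme vs a 1:4 small-group programme,
# both 25 minutes a day, 5 days a week, for 12 weeks.
one_to_one = cost_per_student(1000, 30, 25, 5, 12, 1)
small_group = cost_per_student(1500, 30, 25, 5, 12, 4)
print(f"1:1 cost per student: £{one_to_one:,.2f}")   # £1,750.00
print(f"1:4 cost per student: £{small_group:,.2f}")  # £562.50
```

Even with a higher licence fee, the small-group model works out far cheaper per head once staff time is factored in, which is exactly why the student-to-teacher ratio deserves scrutiny.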

Man or machine?

Computer-based interventions using AI have their advantages: you can have any number of students going through the programme at the same time, and you only need to consider the staffing required to police the intervention, rather than deliver it, as students can largely self-administer.

The drawbacks to this, in my opinion, far outweigh the benefits. Where an external provider offers staff training, you are left with a legacy in school. Remove a computer-based intervention overnight and you are back where you were pre-purchase; remove an intervention which has involved staff training and your staff still retain the specialist training and those acquired skills in their teacher toolkit. With teacher-led interventions, members of staff are specialised.

The most successful intervention is one that is not running parallel to the curriculum or a bolt-on to mainstream practice. If learning and strategies are allowed to cascade within the school, we create the potential for systematic change in teaching practice. It is unlikely that a computer-based intervention, which does not require teacher input, could act as such a catalyst for change.

But let’s be clear - no intervention can be successful and make the desired impact if it is not delivered with fidelity. A teacher-led intervention comes with its own drawbacks: the right staff members need to be nominated for training, and this is something that is often overlooked. Selecting the right member of staff is not just a case of identifying staff with capacity in their timetable; it’s about their ability to deliver the programme as prescribed and ensure it remains robust.

Assessment

You will likely have an assessment tool in school which will allow you to identify your waves of intervention. This is something discussed in The Vocabulary Detectives Assessed, Identified. Now What? podcast episode. You will of course want to see the impact of the intervention within this assessment matrix, but there also needs to be an assessment tool to bookend the intervention. Ideally, the pre- and post-tests should be conducted within a two-week window either side of the intervention period. This assessment tool should be independent and standardised, and of course not a test designed by the programme developers.
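
As a small sketch of what that bookending might look like in practice, the Python below turns pre- and post-test reading ages into a gain and a ratio gain. The dates and scores are invented for illustration; the point is simply that the bookending tests give you everything needed to evaluate the intervention yourself.

```python
# A minimal sketch of bookending an intervention with an independent,
# standardised pre-test and post-test. All dates and scores are hypothetical.
from datetime import date

def months_between(start, end):
    """Approximate whole months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

pre_test = {"date": date(2024, 9, 10), "reading_age_months": 110}
post_test = {"date": date(2025, 1, 20), "reading_age_months": 122}

intervention_months = months_between(pre_test["date"], post_test["date"])
gain_months = post_test["reading_age_months"] - pre_test["reading_age_months"]
print(f"Gain: {gain_months} months over {intervention_months} months")
print(f"Ratio gain: {gain_months / intervention_months:.2f}")
# Gain: 12 months over 4 months
# Ratio gain: 3.00 (substantial, on the bands above)
```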

Deciding on the right intervention or external provider is a difficult decision, but it’s about asking the right questions and critiquing the evidence.

What measurable impact is there?

How is this impact measured and evaluated?

How long before you see this impact?

What’s the ratio gain?

What’s the effect size?

Find the answers to those and you will find your ideal intervention programme.

If you want to learn more about our intervention programmes, check them out here, or if you’re looking for advice, get in touch.