The Education Endowment Foundation (EEF) recently published a new batch of evaluation reports on peer tutoring, some of which were conducted by specialists from NatCen. We came across some challenging findings: two separate peer tutoring programmes, Shared Maths and Paired Reading, did not show a positive impact on academic attainment, even though previous studies have yielded positive effects.
Robbie Coleman and Peter Henderson from the EEF discussed these findings in their latest blog, proposing a number of reasons why this may have been the case. For example, some types of peer tutoring are more effective than others, particularly in certain school levels and settings; how the approaches are implemented affects their impact; and common practice in English schools has changed. Any of these reasons, on its own or in combination, could explain why no positive impact was detected on pupils’ outcomes. It is also possible that the lack of impact on attainment comes down to variation in the way the programme was delivered.
However, there is a growing body of research indicating that contextual factors, such as policies, families, schools and local communities, have a tremendous influence on children’s outcomes. So how can we replicate ‘what works’ in order to maximise our chances of improving children’s learning and attainment?
As a bare minimum, those who design programmes such as Shared Maths and Paired Reading must make the project accessible to teachers and practitioners who want to replicate it, clearly setting out basic information, from who the intervention is meant for to how much it costs to implement. Obviously, creating some intrinsic demand for the intervention among these potential users wouldn’t do any harm either.

When replicating interventions that have been shown to work, it is important to maintain core elements of the original design. For educational interventions, factors such as learning objectives, the number of classroom hours, and types of activities should be kept the same. At the same time, teachers and practitioners need the flexibility to make the intervention ‘their own’. The key to any successful replication is therefore to identify the critical core of the intervention and the ‘ingredients’ that are open to adaptation by intervention providers.
The challenge is that, for many of the interventions that have produced positive results, we still don’t know which ‘ingredients’ are key to achieving the desired change. When designing and evaluating interventions, it is essential not only to identify those core components but also to collect data and report on a uniform set of performance measures, including reach, dosage, fidelity, partners, training, dissemination, and participant-level outcomes, to ensure the best possible results.