Don’t save all the evaluation for the end

Posted on 18 August 2014.
Tags: evaluations, impact, policy evaluations, Triin Edovald, methods

Whether it’s reducing offending rates, tackling child behaviour problems or improving educational attainment, impact evaluations are becoming increasingly popular among those trying to improve the well-being of individuals, households and communities. More and more, policymakers want to know ‘what’s working’.

Evaluation initiatives are often driven by a need for evidence of effectiveness or value for money. This means that evaluation researchers are often brought in at the very end of the process to help estimate whether a programme or policy has worked.

However, evaluations can go a long way towards helping organisations and policymakers improve programme performance and meet their goals, particularly if the right information is available at the right time to support continuous improvement. Evaluation doesn’t have to be a one-off effort to assess the effectiveness of a completed programme. Instead, it can be seen as a continuous learning process that helps service providers and policymakers show whether and how they are helping their community and/or wider society over time.

Getting evaluators involved early on not only improves the design of programmes and policies but also increases the chances of being able to carry out a rigorous impact evaluation when the time comes. Ultimately, this leads to a better allocation of funds across programmes and policies.

One of the crucial questions along the journey from innovation to a proven ‘product’ is deciding when to conduct an impact evaluation. Again, evaluators can help select which interventions should be evaluated by weighing up the strategic relevance of a policy or programme, the innovativeness of the approaches to be tested, the existing evidence that this type of intervention works well in a range of contexts, and when the outcomes can be expected to show an effect.

It’s a job for service providers and evaluators alike. On the one hand, a move towards using evaluation as a tool for dynamic learning requires service providers to actively ask questions such as: what types of data can guide their organisation’s direction, and how do staff know if they are making progress? On the other hand, the research community has to do more to familiarise organisations working to improve the well-being of the groups they serve with evaluation methods, and to help them understand how to integrate continuous improvement into their day-to-day operations.

So there’s work to be done in bringing researchers and practitioners together to make evaluations more useful, but the rewards will be well worth it. We need to start thinking about evaluation as a ready-to-hand tool, not just a moment for taking stock.

Find out more about our work on evaluations here.
