Four lessons from our policy evaluation experiments

Meg Kaufman, 15th June 2018

The best way to grow our understanding of what works in local economic growth is to undertake experiments and evaluate their effectiveness. It may sound obvious, but there are a number of reasons why this can be difficult to implement. We want to find ways to make it easier for policymakers, by designing a programme that will overcome some of the most common obstacles.

Experimentation carries the risk that a policy will turn out not to work, but taking such risks is an important element of any industrial strategy. So we thought now would be a good time to establish a new approach and use it to address the Grand Challenges.

We have been working with a number of partners over the last five years to trial new policy approaches and evaluate their impact to our rigorous standards. We call these demonstration projects, and although we cannot talk about them while they are underway, we will be reporting on their findings as they finish over the next 18 months.

In the meantime, we have already learned a lot about the obstacles to policy experimentation and evaluation from our failures. We have four demonstration projects underway, but around 20 others that we tried to develop fell by the wayside for one reason or another.

Learning from failure is one of my favourite things, second only to the ever-popular learning from success. So here is what we have learned from the demonstrations that failed to get off the ground. We encountered four recurring obstacles:

  • Funding – This affects evaluation efforts in two ways.
  1. First, evaluation should be designed in during the early stages of project development. We worked with a few projects at that stage to design robust evaluations, only to see them fail to secure funding later in the process.
  2. Second, good evaluation adds to project costs, to varying degrees. There is currently no dedicated source of funding for it, and not enough incentive to fund it from money that could otherwise go to programme delivery.
  • Ethical concerns – There is still resistance to randomising who receives a treatment and who goes into a control group. We have written about this before, and there are many ways to address these concerns, but they still scupper many evaluation efforts.
  • Fear of failure – Practitioners are often (rightly) concerned that an evaluation will show their programme did not work the way they had hoped. But we need to know whether a programme is working before spending any more money on it. Presenting projects as experiments from the very beginning is one way to protect them from being seen as failures when they do not go according to plan: if a good evaluation is in place, the learning can still be shared.
  • Lack of scale – Often, projects that come to us for help in setting up an evaluation are too small to generate enough data for a robust evaluation. Scaling a trial across several places can help, but that requires coordination and incentives for participation.

Given the strong emphasis on experimentation in the Industrial Strategy, and the ongoing work on the design of the Shared Prosperity Fund, now seems an ideal time to think about how such a fund could encourage trials.
