
What does evaluation have to do with Local Industrial Strategies?


Those of you who are familiar with the WWCLEG will know that evaluation is central to the work that we do here. But what, if anything, should evaluation have to do with Local Industrial Strategies (LIS)? In our opinion, quite a lot. In fact, evaluation is one of the ten key principles we recommend places bear in mind to develop effective strategies.

What do we mean by evaluation?

To clarify, we are not talking about evaluating the LIS as a whole. By design, LIS will differ across places and will consist of different programmes implemented across different timelines. Evaluating a LIS overall would be very messy, and it would be difficult to draw lessons from the results. But there is still an opportunity here – we think good evaluation can and should be done at the level of the individual interventions and initiatives that will come under the umbrella of a LIS.

The government wants LIS to recognise that places are different. They have different strengths and weaknesses, and the balance of challenges they face is unique. The nature of these challenges, and the ideas that will come to be part of the solution, are less so. While different places may devote varying degrees of resource to a particular issue, it is almost certain that they won't be alone in their efforts to address it. For instance, improving progression possibilities for those in low-skilled work will likely be part of all LIS, although the balance of focus – classroom-based teaching vs. on-the-job training – may vary.

In addition, the guidance for the LIS encourages experimentation, so we would hope to see different approaches emerging to solve similar policy problems. This creates an opportunity to improve our understanding of the effectiveness of different tools and of innovative policy – as long as we learn from it. Over time, this should improve the cost-effectiveness of policy, even if only by avoiding the mistakes of the past. None of this is possible, however, if we do not evaluate and do not learn from previous attempts.

When do we evaluate?

The when and the how of evaluation should ideally be thought about and built into the design of the policy. While we can only know the outcomes and draw conclusions after implementation, evaluation should by no means be an afterthought. Consider the case where a new upskilling programme is introduced for those working in the foundational economy and we want to know its impact on outcomes. It would be useful to know from the start which outcomes we want to focus on, so that we can establish a pre-intervention baseline.

It is also important at the outset to ascertain whether evaluation, in that particular context, is going to be feasible and proportionate. Some policy areas lend themselves better to evaluation than others and sometimes it may not be feasible to evaluate.

Even where feasible, it is worth making sure that the cost and effort of evaluation are proportionate to the size and expected impact of the programme. In a world of competing priorities, evaluation efforts should take into consideration the cost of the programme, its sphere of influence, the cost of the evaluation and the existing evidence base. Where robust evaluation is unsuitable or unfeasible, assessment should not be abandoned altogether, but efforts might be better focused on monitoring and reflection.

Finally, it is not necessary to wait until the 'end' of the intervention to learn from evaluation. In fact, it would be useful to build comparative approaches, or checkpoints with sunset clauses, into the policy design. Comparisons allow us to see which approach is working better; a sunset clause would ideally come with pre-determined criteria for how we would expect the programme to be performing. These would allow us to abandon a stream of work that is evidently not working as it should, or at least to make changes while keeping the elements that are.

How do we evaluate?

While randomised controlled trials (RCTs) are heralded as the gold standard for evaluation, they are not always feasible in a policy context. There are, however, several other methods that can improve understanding and that have proven very useful in other contexts. Our thinking on how to go about this is here, and information on how we can help is here.