How to Evaluate – How long?

In my last post in this How to Evaluate series, I considered some of the issues around collecting data. One related issue that I didn’t discuss in that post is the question of when to evaluate.

In an ideal world, we’d want to think about when policy effects were likely to occur – and evaluate accordingly. For example, if the aim of a training programme is simply to get people into work, it might be appropriate to evaluate shortly after the programme is completed. But if we’re interested in whether the programme leads to long-run employment, we might want to evaluate outcomes a year, or more, after programme completion.

In practice, unfortunately, there are often political pressures to provide early evidence on impact. It’s easy to criticise this impatience and to call for politicians to take a longer-term view. But I think it’s important to try to move beyond this typical reaction – for two reasons. First, because experience suggests that appeals to take the long-term view will often fall on deaf ears. We don’t want the case for better evaluation to stand or fall on an argument we can’t win. Second, if we change our thinking to see evaluation as something that is embedded in the policy design process, then the politicians may be right to insist on early evidence of impact (even if they are doing it for the wrong reasons).

In medical trials, for example, it is good practice for the trial protocol to describe the procedure for deciding whether to discontinue the trial. With that parallel in mind, and given the political realities, it is important to think about ways in which we could get some short-term indication of programme effects (possibly from bespoke survey data, or from secondary data that provides imperfect indicators) that will help inform policy development while waiting for the final outcome data to become available.

To give an example, let’s imagine that a Local Enterprise Partnership (LEP) wants to provide information to Further Education (FE) students on the labour market outcomes they can expect if they take different types of courses. The idea is that telling them about, say, the high wages they could earn as an engineer might encourage them to take courses that lead to that career. The LEP decides to evaluate this idea with a randomised controlled trial before rolling it out across all of its colleges. To do this, it randomly sends some new FE entrants information on likely labour market outcomes shortly before they choose their courses (there are issues with this very simple design, but we can ignore them for the purposes of today’s post).

The long-run aim of this policy is to increase the number of people employed as engineers. But there is plenty of opportunity to use short-run indicators to see whether these long-run effects are likely to be realised. FE colleges will have information on course choices, on completion rates, on exam outcomes and on first destinations. Using this information allows indicative evaluation within a few weeks (course choices), a few months (completion and results) and around a year (first destination for a one-year course). If we see no movement on any of these outcomes, we might want to avoid wasting further money on this project next year (as well as avoiding the costs of the bespoke survey planned to pick up career destinations two years on from graduation). Of course, if these indicators are moving in the right direction, that may well justify continuation of the pilot and the costs of following up over a longer time scale.
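To make that interim check concrete, here is a minimal sketch (in Python) of how the earliest indicator – the share of treated versus control entrants choosing an engineering course – could be tested a few weeks after the information is sent. The file name, column names and stopping threshold are all hypothetical, and a real trial would pre-specify its interim analysis rather than improvise it.

```python
# Illustrative interim check on a short-run indicator (course choices).
# The data file, column names and decision threshold are hypothetical.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# One row per new FE entrant: whether they received the information
# ('treated', 0/1) and whether they chose an engineering course (0/1).
df = pd.read_csv("fe_entrants_course_choices.csv")  # hypothetical file

counts = df.groupby("treated")["chose_engineering"].agg(["sum", "count"])
successes = counts["sum"].values   # entrants choosing engineering, by arm
totals = counts["count"].values    # entrants in each arm (control, treated)

# Two-sample proportions z-test: did the information shift course choices?
z_stat, p_value = proportions_ztest(successes, totals)

# Difference in the share choosing engineering: treated minus control.
effect = successes[1] / totals[1] - successes[0] / totals[0]
print(f"Difference in share choosing engineering: {effect:.3f} (p = {p_value:.3f})")

# A simple stopping rule of the kind discussed above: if the earliest
# indicator shows no movement, flag the pilot for review before further spend.
if p_value > 0.10 or effect <= 0:
    print("No early evidence of impact - review before continuing the pilot.")
else:
    print("Early indicator moving in the right direction - continue and follow up.")
```

The same pattern would apply to the later indicators (completion rates, results, first destinations), with the decision rule agreed in advance so that the interim results genuinely inform the continuation decision rather than being cherry-picked after the fact.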

The crucial point is that we need to be thinking about how the evaluation will be used to feed back into policy design. If piloting is truly embedded in the policy design process, then we may want to compromise on the suitability of outcome measures to allow early evaluation based on readily available data. Of course, the long-run follow-up that looks at the ultimate objective is still vital to assessing the long-run effectiveness of the policy. But looking at short-run indicators can provide important information that helps determine whether to keep going – a decision which has implications for both programme and evaluation costs. Once again, this discussion only serves to highlight the importance of starting early and embedding evaluation in the policy design process.