How to evaluate employment training: Vocational training in Sweden (statistical approach)

What was the programme and what did it aim to do?

This study assesses vocational training programmes for the unemployed in Sweden during the 1993-97 recession. At the time, Swedish active labour market policy combined passive and active strands: the former supported unemployed people through benefit payments, while the latter involved a range of vocational and non-vocational labour market programmes. The evaluation focuses on the vocational training element, which comprised about 20 per cent of all courses. These courses were primarily aimed at getting unemployed people back into work.

What’s the evaluation challenge?

Evaluating the impact of active labour market programmes is not straightforward, since only specific types of individuals take part. Caseworkers decided which unemployed individuals were put on a course, and they may have chosen individuals with specific observable (e.g. young) or unobservable (e.g. motivated) characteristics. This means that if we compare outcomes for unemployed individuals on the training programme with outcomes for other unemployed individuals, the differences may not reflect the impact of the programme. Instead, they may simply reflect differences in the type of individuals who participated.
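
To see the problem concretely, here is a toy simulation (not from the paper; all numbers are made up) in which an unobserved trait, motivation, drives both programme take-up and employment. The naive participant vs non-participant comparison then overstates the programme's true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved motivation raises both programme take-up and employment.
motivation = rng.normal(size=n)
treated = (motivation + rng.normal(size=n)) > 0.5  # caseworkers favour the motivated

# Assume the programme truly adds 5 percentage points to employment chances.
p_employed = np.clip(0.4 + 0.05 * treated + 0.15 * motivation, 0, 1)
employed = rng.random(n) < p_employed

naive_gap = employed[treated].mean() - employed[~treated].mean()
print(f"Naive participant vs non-participant gap: {naive_gap:.3f} (true effect: 0.050)")
```

The naive gap comes out well above 0.05, because participants would have done better even without the training.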

What did the evaluation do?

The study created a control group of unemployed individuals who were not on the programme by matching them to programme participants on observable characteristics (e.g. age, education, gender). To control for unobservable differences, the study exploited a form of randomness in the policy: areas with historically low unemployment (measured in 1991) had more places on the training programme. The study then compared outcomes for individuals who received the treatment simply because they lived in an area with historically low unemployment to matched individuals in other areas. The method used to do this is called a ‘control function’ approach.
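
A minimal sketch of this two-stage logic is below, using a simple linear control function; note that the paper itself fits a more elaborate one-factor model, and the variable names here are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def control_function_effect(y, d, x, z):
    """Linear control-function estimate of the effect of d on y.

    y: outcome (e.g. employed in 1995), d: programme participation (0/1),
    x: observed covariates (age, education, ...), z: instrument
    (e.g. the 1991 unemployment rate in the individual's area).
    """
    # Stage 1: model participation with the instrument and covariates;
    # the residual absorbs unobserved drivers of selection (e.g. motivation).
    stage1 = sm.OLS(d, sm.add_constant(np.column_stack([z, x]))).fit()
    v_hat = stage1.resid

    # Stage 2: regress the outcome on participation, the covariates and the
    # stage-1 residual; including v_hat "controls away" the selection bias.
    stage2 = sm.OLS(y, sm.add_constant(np.column_stack([d, x, v_hat]))).fit()
    return stage2.params[1]  # coefficient on d: the estimated programme effect
```

This only works if the instrument genuinely shifts participation without affecting outcomes directly, which is what the validity check discussed below probes.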

To implement this approach, the researchers combined two existing national administrative datasets: one covering jobseekers, the other a national labour force survey (covering 1 per cent of the population). This provided rich data on personal characteristics, earnings and unemployment history, used for matching, as well as area-level unemployment, used for the control function.
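
In practice, the linkage step might look something like the following sketch; the file and column names are invented for illustration and do not correspond to the actual Swedish registers:

```python
import pandas as pd

# Hypothetical inputs: a jobseeker register (unemployment spells, programme
# participation) and a 1% labour force survey (earnings, education, household).
jobseekers = pd.read_csv("jobseeker_register.csv")
lfs = pd.read_csv("labour_force_survey.csv")

# Link the two sources on a shared person identifier, keeping only
# jobseekers who also appear in the survey sample.
matched = jobseekers.merge(lfs, on="person_id", how="inner")

# Attach the instrument: the 1991 unemployment rate in each person's area.
area_unemp = pd.read_csv("area_unemployment_1991.csv")  # area_id, unemp_rate_1991
matched = matched.merge(area_unemp, on="area_id", how="left")
```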

How good was the evaluation?

According to our scoring guide, a control function that uses a natural source of randomness (an ‘instrument’) scores a maximum of 4 (out of 5) on the Maryland Scientific Methods Scale (Maryland SMS). This is because it controls well for both observable differences (e.g. age) and unobservable differences (e.g. motivation) between supported and unsupported individuals. To be well implemented, the instrument (i.e. 1991 unemployment levels) needs to identify a set of individuals who received the treatment but were otherwise no different. This is likely to be the case here, since the paper demonstrates that historical unemployment in 1991 is not related to employment in 1995, when the outcomes were measured. For this reason we score this method 4 on the Maryland SMS.
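
One simple way to probe this requirement (a rough diagnostic, not necessarily the exact test the paper reports) is to check that, among non-participants, 1991 area unemployment has no predictive power for 1995 employment once observed characteristics are held fixed:

```python
import numpy as np
import statsmodels.api as sm

def exclusion_check(y, x, z, treated):
    """Placebo-style check on the instrument.

    Among non-participants, the instrument z (1991 area unemployment)
    should not predict the outcome y (employment in 1995) once the
    observed covariates x are controlled for. `treated` is a boolean
    participation indicator.
    """
    mask = ~treated  # restrict to non-participants
    exog = sm.add_constant(np.column_stack([z[mask], x[mask]]))
    fit = sm.OLS(y[mask], exog).fit()
    print(f"Instrument coefficient: {fit.params[1]:.4f} (p = {fit.pvalues[1]:.3f})")
    return fit.params[1]
```

A coefficient close to zero (and statistically insignificant) is consistent with the instrument affecting outcomes only through the programme; a large, precisely estimated coefficient would be a warning sign.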

What did the evaluation find?

The evaluation found that participation in training increased employment chances by 7 per cent the following year, but that the more able were the most likely to take part. Among Swedish-born participants, men, those with high-school education and those with young children experienced the biggest impacts; among migrants, those from Arab and African countries were least likely to gain.

What can we learn from this?

These findings suggest the training programme had a robust but small effect on employment, even in a recession. But those most in need seemed least likely to participate, suggesting a need for active outreach; programmes may need to be designed to support those least likely to gain. There are also questions about long-term impacts: the researchers only look at effects 12 months on, but other studies suggest that some training programmes can deliver employment gains several years later.

An evaluation of this type could be replicated in the UK. Similar datasets exist – National Insurance and Jobcentre Plus data – and can be matched together. The DWP can already do this, as could LADs/LEPs if they had access to these datasets. Local evaluations would also need to take account of features of UK policy that might complicate matters – for example, information held by Work Programme providers rather than Whitehall.

Reference

Andrén, T. & Andrén, D. (2006). Assessing the employment effects of vocational training using a one-factor model. Applied Economics, 38, 2469–2486. [Study 127 from our Employment Training review]
