
How to evaluate apprenticeships: Training disadvantaged youth in Germany


What was the programme and what did it aim to do?

This study evaluates a pilot project that supported firms in training disadvantaged youth who would not otherwise have been hired as apprentices. The project was run by the employer association of the metal and electrical industry in the German state of Baden-Württemberg between 2010 and 2015. The association funded external social workers to provide social-skills training for apprentices, compensating for a perceived lack of family support. The programme was mainly a response to excess demand for apprentices in the region, driven by demographic change and a growing share of school leavers opting for an academic (rather than vocational) education. It thus aimed to help employers fill vacant apprenticeship positions.

What’s the evaluation challenge?

Evaluating programmes such as this is challenging because only certain types of firms participate. In particular, firms may have been more likely to participate precisely because they were already facing greater difficulties in filling vacancies. In that case, even if the programme was partially effective, participating firms could still have a higher vacancy rate after the programme was implemented. A simple comparison of vacancy rates between participating and non-participating firms may therefore reflect differences in the types of firms that participate, rather than the effect of the programme.

What did the evaluation do?

To deal with this problem, the study conducts a difference-in-differences analysis. This involves selecting a control group of firms that did not participate in the programme but, given their observable characteristics, should have had similar trends in hiring difficulties. The authors then compared changes in unfilled apprenticeship positions between participating and control-group firms to estimate the effect of the programme.
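To make the logic concrete, the sketch below shows how a basic difference-in-differences estimate of this kind can be computed. It is a minimal illustration, not the study's actual specification: the file name, column names (firm_id, year, unfilled, treated) and the 2010 programme start used here are all assumptions.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: one row per firm and year, with the
# number of unfilled apprenticeship positions as the outcome and a
# 0/1 indicator for programme participation.
df = pd.read_csv("firm_panel.csv")  # columns: firm_id, year, unfilled, treated

# Indicator for the post-programme period (start year assumed here).
df["post"] = (df["year"] >= 2010).astype(int)

# The coefficient on treated:post is the difference-in-differences
# estimate: the change in unfilled positions for participating firms,
# relative to the change for control firms over the same period.
model = smf.ols("unfilled ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.summary())

Clustering the standard errors by firm, as above, is a standard choice in this setting because outcomes for the same firm are correlated over time.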

How good was the evaluation?

According to our scoring guide, difference-in-differences can achieve a maximum score of 3 (out of 5) on the Maryland Scientific Methods Scale (SMS). This is because difference-in-differences does a good job of controlling for selection on observable firm characteristics but cannot deal with selection on unobserved characteristics. A successful implementation requires that participating firms would have developed similarly to non-participants had they not participated in the programme. Since the study constructs a control group that is as similar as possible to the treatment group, given the data available, we score the evaluation 3 on the SMS.
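One common way to build such a control group is to match each participating firm to its nearest non-participating neighbour on pre-programme observables. The sketch below illustrates this; the matching variables (firm size, number of apprentices, pre-period unfilled positions) are illustrative assumptions, not the study's actual covariates.

import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

firms = pd.read_csv("firms.csv")  # one row per firm, pre-programme data
covariates = ["n_employees", "n_apprentices", "unfilled_pre"]

treated = firms[firms["treated"] == 1]
pool = firms[firms["treated"] == 0]

# Standardise covariates so that no single variable dominates the
# distance metric.
scaler = StandardScaler().fit(firms[covariates])

# For each participating firm, find the most similar non-participant.
# This simple version matches with replacement, so a control firm can
# be matched to more than one treated firm.
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(pool[covariates]))
_, idx = nn.kneighbors(scaler.transform(treated[covariates]))
controls = pool.iloc[idx.ravel()]

Whatever the matching procedure, it can only balance the characteristics that are observed in the data, which is exactly why the method cannot score above 3 on the SMS.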

What did the evaluation find?

The evaluation found that the programme had no effect on the number of unfilled apprenticeship vacancies at participating firms. The study checked the robustness of this finding using alternative outcome variables: the share of apprenticeship positions that went unfilled, and a dummy variable for whether any positions remained unfilled. The programme had no effect on either of these alternative outcomes.
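Robustness checks of this kind amount to re-running the same specification with each alternative outcome. Continuing the hypothetical panel from the sketch above, with assumed column names for the alternative outcomes:

import statsmodels.formula.api as smf

outcomes = [
    "unfilled",        # count of unfilled apprenticeship positions
    "share_unfilled",  # share of positions that went unfilled
    "any_unfilled",    # dummy: 1 if any position went unfilled
]

# Re-estimate the difference-in-differences model for each outcome and
# report the interaction coefficient and its p-value.
for y in outcomes:
    res = smf.ols(f"{y} ~ treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
    )
    print(y, res.params["treated:post"], res.pvalues["treated:post"])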

What can we learn from this?

In this case study, training support for hard-to-place youth was not an effective tool for helping firms meet excess demand for apprentices. The evaluation cannot tell us with certainty why this was the case, but the study offers some clues from interviews with the training firms and social workers involved. These indicated that firms did not view the programme primarily as a means of filling vacancies. Some firms may have participated out of social responsibility, while others may have intended to fill apprenticeship vacancies but failed to do so. For programmes like this to succeed, it may therefore help to target them at the second type of firm: those with genuine difficulties in hiring apprentices.

Reference

Mohrenweiser, J. and Pfeiffer, F. (2014). Coaching disadvantaged young people: Evidence from firm level data. ZEW Discussion Paper No. 14-054, 27 pp. [Study 987 from our Apprenticeships Review, available here]
