How to evaluate employment training: Providing online advice to job seekers at low cost

What was the programme and what did it aim to do?

This study evaluates the impact of a novel unemployment support pilot. Online employment advice was offered to 300 job seekers in Edinburgh with the aim of improving participants’ job search behaviour by increasing the number of applications sent. In turn, it was hoped this would increase the number of interviews secured and ultimately raise job finding rates. Job seekers were recruited from Job Centres and asked to attend job search sessions in a lab once a week for 12 weeks, where they were given computers and a standard web-based job search interface. After three weeks, half of the participants were randomly offered the possibility of using a more sophisticated and tailored interface, which displayed relevant alternative occupations and associated jobs on the basis of readily available labour market data, such as labour market transitions and transferable skills. Job seekers were eligible to take part if they were unemployed, over 18 years old and had searched for a job for less than 12 weeks.

What’s the evaluation challenge?

Evaluating the impact of employment advice programmes such as these is difficult because job seekers who choose to participate tend to differ from those who do not, in ways that are hard to observe or measure. Because of this selection, differences in outcomes between job seekers who used a new online tool and those who chose not to would not necessarily reflect the impact of the programme. Instead, they may simply reflect the characteristics of the job seekers who chose to use the tool (e.g. being more driven or self-motivated).

What did the evaluation do?

The study deals with these selection problems by implementing the policy as a randomised controlled trial (RCT): half of the participants were offered the new interface after three weeks, and the remaining participants continued with the standard interface. If the randomisation is done properly, all the individual characteristics that might influence successful job search should be balanced between the control and treatment groups, so that, on average, the estimated difference in performance between the treated and the non-treated can be attributed to the programme. Using the fact that the novel web interface was introduced after three weeks, the evaluation compares the change in outcomes for the treated group before and after the new tool was offered with the same change for the control group, which was not offered the tool. This method is known as difference-in-differences, implemented here on top of randomisation across the groups to help control for anything else that might have driven changes in behaviour, as sketched below.
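
As a rough illustration of the method, the difference-in-differences estimate can be recovered as the coefficient on the interaction between treatment status and the post-introduction period in a simple regression. The sketch below assumes a long-format weekly panel with hypothetical column names; it is not the authors' actual analysis code.

```python
# A minimal sketch of the difference-in-differences comparison described above,
# assuming a long-format weekly panel with hypothetical columns:
#   'applications'   - weekly number of applications sent (outcome)
#   'treated'        - 1 if the participant was offered the new interface
#   'post'           - 1 for weeks after the new interface was introduced
#   'participant_id' - identifier used to cluster standard errors
# Illustrative only, not the study's actual analysis code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("job_search_panel.csv")  # hypothetical input file

# 'treated * post' expands to treated + post + treated:post; the coefficient
# on the interaction term treated:post is the difference-in-differences
# estimate: the change for the treated group minus the change for controls.
model = smf.ols("applications ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["participant_id"]}
)
print(model.summary())
```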

How good was the evaluation?

According to our scoring guide (www.whatworksgrowth.org/resources/scoring-guide), randomised controlled trials can achieve the maximum score of five on the Maryland Scientific Methods Scale (SMS). This is because randomisation balances out both observable (e.g. experience) and unobservable (e.g. effort) characteristics between the treated and non-treated. In this study, the randomisation worked satisfactorily: the treated and untreated groups were not statistically different on 31 out of 32 sociodemographic variables tested (the only significant difference being the number of children). This suggests that randomisation worked well for those involved in the pilot, but we might still worry about whether the results generalise if the sample differs from the wider unemployed population. The study shows that although survey participants are not significantly different from the unemployed population on most dimensions, women and non-whites were oversampled, and these groups attended more job interviews. Given that job interviews are a key outcome measure, this raises some concern over the generalisability of the results to the wider population (i.e. ‘external validity’). Another important challenge in an RCT is attrition, as differential attrition between treatment and control groups could signal that the two groups differ in unobservables. The study finds no systematic attrition in terms of observable characteristics. Overall, we score the study at the maximum of five on the SMS.
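
To make the balance check concrete, the sketch below compares treated and control groups on a handful of hypothetical baseline variables using two-sample t-tests and flags any significant differences. In a well-randomised trial, one or two significant differences out of 32 tests at the 5% level is roughly what chance alone would predict, which is why a single imbalance (number of children) is not alarming.

```python
# A minimal sketch of the kind of balance check described above: comparing the
# treated and control groups on each baseline characteristic with a two-sample
# t-test. The variable names below are illustrative assumptions, not the
# study's actual covariates.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("baseline_survey.csv")  # hypothetical input file
covariates = ["age", "num_children", "education_years", "weeks_unemployed"]

treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]

for var in covariates:
    stat, p_value = ttest_ind(treated[var], control[var], nan_policy="omit")
    label = "imbalanced" if p_value < 0.05 else "balanced"
    print(f"{var}: t = {stat:.2f}, p = {p_value:.3f} ({label})")
```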

What did the evaluation find?

The treatment had a small but significant effect on the broadening of job searches. The evaluation finds no overall treatment effect on the number of applications, but finds that the number of job interviews increased by 44% (although from a low, self-reported base of 0.09 interviews per week on average). Effects were stronger for participants who otherwise searched narrowly and who had been unemployed for longer than the median duration of 2.5 months. The evaluation also finds that the intervention changed behaviour outside the platform: job offers accruing from other job search activities also increased significantly, indicating that some effects of the information intervention spilled over into other job search activities. One might worry that the increase in interviews came with a change in the quality of interviews, but the study shows that the average wages of the jobs interviewed for do not change significantly. Finally, the study finds a negative, but insignificant, effect on job finding rates (although sample sizes are quite small due to attrition).

What can we learn from this?

This study provides robust evidence that a soft, non-coercive intervention in the form of online advice can be a cost-effective way of supporting job seekers to change their job search behaviour, and that this can help secure a small number of additional interviews for the treated group. The cost of programming the highly scalable web tool was only £20,000. However, this study does not provide evidence that the alternative interface increased job finding rates; if anything, the effects were negative, albeit insignificant. To draw stronger conclusions on job finding rates, researchers would need larger sample sizes. Further analysis of the long-term impacts of this tool as well as broader general equilibrium effects would be useful.
