
How to evaluate area based initiatives: Promise programme


Statistical approach (SMS level 3)

What was the programme and what did it aim to do?

This study evaluates the impact of place-based “Promise” programmes in the United States. Promise programmes guarantee financial aid for university or college to eligible students within a particular school district. The initial Kalamazoo Promise paid all fees for four years for graduates of Kalamazoo, Michigan’s public high schools to attend any of Michigan’s state colleges or universities. Subsequent Promise programmes varied significantly in design. For example, the maximum scholarship ranged from $600 to $10,000 and the duration from one to five years. Eligibility also varied: some programmes were universal like the original Kalamazoo programme, while others introduced requirements such as a minimum GPA. Promise programmes are intended as a place-making policy that boosts local economic development by attracting and retaining parents with high human capital (who would like their children to benefit from the financial aid).

What’s the evaluation challenge?

Evaluating the economic effects of area-based interventions such as Promise is difficult because areas that receive support are different to those that do not. In this case, Promise programmes targeted the most distressed communities in the United States. As a result of this selection, if we compare differences in school enrolment or house prices for Promise areas to other areas, these differences may not reflect the impact of the programme. Instead, they may simply reflect differences in the type of areas that receive Promise programmes.

What did the evaluation do?

To address this issue, the study compares the before-and-after change in school enrolment and house prices for Promise areas (‘treated’ areas) with areas that were similar on pre-intervention observable characteristics (e.g. demographics) but did not receive Promise support (‘control’ areas). This method is called the difference-in-differences approach.
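To make the approach concrete, here is a minimal sketch of a difference-in-differences regression in Python. The data, the treatment timing, and the column names (area, treated, post, enrolment) are all invented for illustration; this is not the authors’ actual specification.

```python
# Minimal difference-in-differences sketch on invented data.
# One row per area-period; 'treated' marks Promise areas and
# 'post' marks observations after the programme launch.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "area":      ["A", "A", "B", "B", "C", "C", "D", "D"],
    "treated":   [1,   1,   1,   1,   0,   0,   0,   0],
    "post":      [0,   1,   0,   1,   0,   1,   0,   1],
    "enrolment": [100, 120, 90,  108, 95,  98,  105, 109],
})

# The coefficient on the interaction term is the DiD estimate:
# the before-after change in treated areas minus the
# before-after change in control areas.
model = smf.ols("enrolment ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])  # 15.5 with the numbers above
```

In the study itself the regression also controls for the observable area characteristics mentioned above; the logic of the interaction term is the same.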

How good was the evaluation?

According to our scoring guide, the difference-in-differences method receives a maximum of 3 (out of 5) on the Maryland Scientific Methods Scale. This is because, although it controls well for observable differences (e.g. neighbourhood demographics) between supported and non-supported areas, it cannot control for unobserved differences (e.g. how accessible jobs are from the area) that change over time. The study ensures the control group is as similar as possible to the Promise treatment group by considering only nearby areas and controlling for remaining observable differences in the regression analysis. Furthermore, there is a clearly defined treatment date. We therefore score the study a 3 out of 5 on the SMS.
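A common diagnostic for this limitation is to check whether treated and control areas were already on similar trends before the treatment date. The sketch below shows one way such a pre-trends check could be run on the hypothetical data layout used above, now assuming a year column and a made-up 2006 launch year; it is not a procedure taken from the study.

```python
# Sketch of a parallel pre-trends check (hypothetical data layout).
import pandas as pd
import statsmodels.formula.api as smf

def pre_trend_gaps(df: pd.DataFrame, treatment_year: int = 2006) -> pd.Series:
    """Year-by-year treated-control gaps before the programme launched."""
    pre = df[df["year"] < treatment_year]
    # Interact treated status with year dummies: if the treated:year
    # coefficients are all close to zero, treated and control areas
    # moved together before the programme, which supports (but cannot
    # prove) the parallel-trends assumption behind DiD.
    model = smf.ols("enrolment ~ treated * C(year)", data=pre).fit()
    return model.params.filter(like="treated:")
```

Flat pre-trends make the comparison more credible, but they cannot rule out unobserved shocks that coincide with the treatment date, which is why the method scores a 3 rather than higher on the scale.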

What did the evaluation find?

The evaluation finds that Promise programmes led to increases in both school enrolment and house prices. The largest enrolment effects are found for Promise programmes that are universal (no merit-based eligibility) and cover a wide range of schools; restrictions on eligibility or the range of schools reduce the enrolment effect. The evaluation also finds that Promise programmes with merit-based eligibility requirements led to a relative increase in school enrolment by white pupils. Finally, house price increases were strongest at the top of the house price distribution.

What can we learn from this?

The findings from this study suggest that, so long as they are not too restrictive, Promise-type programmes may be able to attract households with children to an area, resulting in an increase in school enrolment. It is interesting that universal programmes have the biggest impacts, but they will also cost more, emphasising the need for a way to compare benefits and costs. The findings also suggest that elements of the policy design may have distributional consequences. For example, a merit-based eligibility requirement may fail to support disadvantaged households. Further, the fact that house price capitalisation effects are greatest for homes that are already the most expensive suggests that relatively wealthy households are willing to pay the most to live in Promise areas, which may indicate that they benefit the most. If such programmes were to be implemented in the UK, designers should pay particular attention to the ways in which elements of the policy design could affect educational equity, as well as to questions of cost-effectiveness.

Reference

LeGower, M., & Walsh, R. (2017). Promise scholarship programs as place-making policy: Evidence from school enrollment and housing prices. Journal of Urban Economics, 101, 74-89.
