We’ve added four more ‘how to evaluate’ case studies to our resources page. These case studies talk you through existing evaluations, spelling out the evaluation challenges they faced and how they addressed them.
The latest case studies are from our review of the local economic growth effects of sports and culture. The sports ones are evaluations of the new Wembley and Emirates stadia and of the 1996 Atlanta Olympics.
Clearly, such large events and facilities can’t easily be evaluated using a randomised controlled trial (our preferred evaluation approach, when feasible). Instead, all four studies use the difference-in-difference approach, the focus of today’s blog.
The difference-in-difference method looks at before-and-after changes in an outcome for ‘treatment’ and ‘control’ areas. The treatment effect – that is, the estimate of what changes as a result of the project – is then the difference between these two differences, hence the method’s name.
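The calculation itself is simple enough to sketch in a few lines. Here’s an illustration using made-up outcome figures (the numbers and variable names are hypothetical, purely to show the arithmetic):

```python
# Hypothetical before-and-after outcomes (e.g. average local employment)
treated_before, treated_after = 100.0, 130.0  # 'treatment' area
control_before, control_after = 100.0, 110.0  # 'control' area

# Change in each area over the same period
change_treated = treated_after - treated_before  # 30.0
change_control = control_after - control_before  # 10.0

# The estimated treatment effect is the difference between the two differences
treatment_effect = change_treated - change_control
print(treatment_effect)  # 20.0
```

The control area’s change (10.0) stands in for what would have happened in the treatment area anyway, so only the remaining 20.0 is attributed to the project.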
By comparing against a control group, difference-in-difference helps eliminate changes that would have happened anyway (in the absence of the project). This makes it a fairly robust method: it scores 3 out of 5 on the Maryland Scientific Methods Scale that we use to rank evaluations.
One of the major benefits of difference-in-difference is that it is relatively easy to implement. The only real requirements are before-and-after data for treated and control groups, and a control group that is similar to your treated group (e.g. runner-up cities that could have hosted the event).
Overall, the method offers a good trade-off between difficulty of implementation and robustness of results, which in many cases makes it a good choice of evaluation technique – especially when explicit randomisation isn’t feasible. If you do opt for a difference-in-difference design, make these case studies your first port of call.