The idea for this post came from a tweet sent out by David Henderson a couple of weeks ago.
That sounded like a challenge. So I reached out to David to ask if he would like to collaborate on a cartoon post covering the subject; he graciously accepted.
About the Illustrations
The tips and expertise are courtesy of David. The cartoons were developed by yours truly.
Could you do me a favor? If you like the cartoons, connect with me on LinkedIn and endorse me for cartoons.
A few notes:
- If you like the post, write a comment and let me know.
- Share it with colleagues. Seeing people share my cartoons inspires me to create more.
- If you think I’m missing critical pieces to the overall discussion, let me know in the comments.
- Please feel free to use my cartoons in presentations, training materials, etc.
What is a Counterfactual?
Impact is the difference between the outcomes of an individual who participates in a program and the outcomes that same individual would have experienced, at the same point in time, had they not participated. Since the same person cannot be simultaneously enrolled and not enrolled in a given program, impact evaluations are concerned with estimating the missing counterfactual: an estimate of what would have happened to an individual had they not participated in the program.
In robust evaluations, counterfactuals are estimated by randomly assigning some people to a program treatment group, and others to a control group. For a brief discussion of impact evaluation and the role of counterfactuals in evaluation, see this page by The World Bank.
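The logic of random assignment can be sketched with a toy simulation. All of the numbers here are hypothetical, chosen only to make the mechanics visible: each person gets a baseline outcome, the program is assumed to add a true effect of +5, and the control group's average stands in for the missing counterfactual.

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only.
# Each person has a baseline outcome; assume the program's true effect is +5.
TRUE_EFFECT = 5.0
baselines = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: half to treatment, half to control.
random.shuffle(baselines)
treatment = [b + TRUE_EFFECT for b in baselines[:5_000]]
control = baselines[5_000:]

# The control group's average outcome stands in for the missing
# counterfactual, so the difference in group means estimates the impact.
estimated_impact = sum(treatment) / len(treatment) - sum(control) / len(control)
print(round(estimated_impact, 2))  # should land close to the true effect of 5
```

Because assignment is random, the two groups are alike on average in everything except program participation, which is exactly what makes the difference in means a credible impact estimate.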
Developing a Counterfactual
Your program is likely not to blame for a sputtering economy, nor does it deserve credit for an economic upswing. The key to a good evaluation is to try to isolate the effect of your program, irrespective of external factors.
If you don’t acknowledge the imperfections in your data and try to estimate a counterfactual, your program officer will. You’re better off poking holes in your own data before anyone else gets the chance.
Results Too Good to Be True
If your results sound too good to be true, you are more likely to have made a mistake in your evaluation than to have discovered you’re a genius.
Outliers do not make for compelling client testimonials. Use your metrics to identify what the average experience in your program looks like, and get testimonials from people who fit this profile.
100% of successful people succeed. Losers are a terrible comparison group for winners. Make sure your comparison group is identical to your treatment group, with the only difference being participation in your program.
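The winners-versus-losers pitfall can be made concrete with another toy simulation. Everything here is hypothetical, including the "motivation" variable: the program is assumed to do nothing at all, yet comparing self-selected enrollees to everyone else still produces a large apparent impact, while random assignment correctly finds an effect near zero.

```python
import random

random.seed(0)

def outcome(motivation):
    # Assume the program's true effect is zero: outcomes depend only
    # on a person's motivation plus random noise.
    return 50 + 20 * motivation + random.gauss(0, 5)

def mean(values):
    values = list(values)
    return sum(values) / len(values)

motivation = [random.random() for _ in range(10_000)]

# Bad comparison: self-selected "winners" (highly motivated enrollees)
# versus everyone else.
enrolled = [m for m in motivation if m > 0.6]
not_enrolled = [m for m in motivation if m <= 0.6]
biased_impact = (mean(outcome(m) for m in enrolled)
                 - mean(outcome(m) for m in not_enrolled))

# Good comparison: random assignment balances motivation across groups.
random.shuffle(motivation)
randomized_impact = (mean(outcome(m) for m in motivation[:5_000])
                     - mean(outcome(m) for m in motivation[5_000:]))

print(round(biased_impact, 1))      # large, despite a true effect of zero
print(round(randomized_impact, 1))  # close to zero
```

The biased estimate is picking up pre-existing differences in motivation, not the program; that is the gap between a comparison group that merely looks plausible and one that is genuinely identical to the treatment group.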
What thoughts do you have on the subject of counterfactuals? Is there anything you would like to add to the discussion? Let us know in the comments.