Is our program REALLY making a difference?
This question is deceptively simple. “Yes, of course…” we answer, and share stories or output measurements like, “200 people attended the course,” or “92% completed the program.”
Unfortunately, the question’s simplicity belies the complexity of answering it in practice. Estimating the real effects of interventions is challenging work. The good news is that accessible methods and software can greatly improve how we evaluate our programs.
In this series of posts, we will introduce the concept of counterfactuals and how our organizations can take steps toward improving our measurement of outcomes and impact. Before we get into counterfactuals and the potential outcome framework, let’s use a logic model to clarify terms.
Logic models help define the relationship between our planned work and intended results.
Inputs: Resources needed for program activities.
Activities: Services or components of a program.
Outputs: Direct result of activities.
Outcomes: Intended benefits of activities.
It’s important to distinguish outputs and outcomes because outputs can easily inflate our perceived impact.
During my freshman year of college, I started a weekly meeting in a local prison with men approaching reentry. It was a form of kind-hearted social malpractice that I thought was making a real impact. Why? The room was packed every week, and the inmates’ feedback was always positive. The difference between outputs (attendance) and outcomes (benefits) can be revealed by following up each output with the question, “so that…?” “Well, they come to the meetings so that they can develop a reentry plan, identify housing and job prospects, and strengthen positive relationships.” These were some of the program’s (unmeasured) outcomes. How many inmates actually obtained stable housing and employment? Such outcome measurements could be further distinguished into short- and long-term estimates.
But do outcomes equate to impact?
No, outcomes alone cannot fully answer the question, “Is your program making a difference?” We could show convincing pre- and post-test data that is statistically significant. What outcomes alone cannot tell us, however, is what would have happened if the person had not participated in the program. This is known as the counterfactual.
The impact of a program is simply “what happened” minus “what would’ve happened without it.”
Impact = factual − counterfactual
Since counterfactuals aren’t directly observable, causal inference methods are needed to create conditions in which we can come as close as possible to observing the unobservable.
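To fix the intuition, here is a minimal Python sketch of the formula with made-up numbers. The counterfactual value is purely hypothetical; in real evaluations it is never observed and must be estimated:

```python
# Toy illustration of: Impact = factual - counterfactual
# All numbers are hypothetical; in practice the counterfactual
# outcome is unobservable and must be estimated.

factual = 1000        # outcome we actually observed (e.g., monthly income after the program)
counterfactual = 950  # estimated outcome had the person NOT participated

impact = factual - counterfactual
print(impact)  # 50
```

The arithmetic is trivial; the hard part of impact evaluation is producing a credible estimate for the `counterfactual` line.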
Think for a minute about a pre-post test where we calculate a simple difference. We’re running a rapid rehousing program for homeless Veterans, and one outcome metric is income. Say Bill comes into our program with a monthly income of $800, and when he finishes the program his income is $1000. Can the simple difference of $200 be attributed to the program? In other words, can we equate the simple difference with a treatment effect (impact) of the program?
Treatment effects (impact) are changes in outcomes due to changes in treatment (activities) holding all other variables constant.
This last phrase is really important. To answer the question, “Is the program REALLY making a difference?” we need to isolate the effect of the program, which means that a simple difference between pre- and post-test scores cannot be the treatment effect. Post-test data could show significant positive change during the time of the program, but how do we know the program is responsible for the change? We need a way to isolate the effects of the program to obtain the real treatment effects.
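One common way to approximate the counterfactual is a comparison group, as in a difference-in-differences design (a preview of methods covered later in the series). Using Bill’s numbers plus a hypothetical, comparable non-participant, a sketch of the idea:

```python
# Difference-in-differences sketch with hypothetical numbers:
# Bill (participant) and a comparable non-participant observed
# over the same period. The comparison figures are made up.

bill_pre, bill_post = 800, 1000   # Bill's monthly income before/after the program
comp_pre, comp_post = 800, 950    # comparison person's income over the same period

naive_difference = bill_post - bill_pre       # 200: raw change during the program
trend = comp_post - comp_pre                  # 150: change expected without the program
treatment_effect = naive_difference - trend   # 50: program effect net of the trend

print(naive_difference, trend, treatment_effect)  # 200 150 50
```

Under these assumed numbers, most of Bill’s $200 gain would have happened anyway; only $50 is plausibly attributable to the program. The validity of the estimate rests entirely on how comparable the comparison group really is.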
The gold standard for such an evaluation is a randomized controlled trial (RCT); however, our focus in this series will be social programs where resources are limited and RCTs aren’t feasible. How can we identify the effects of our programs with limited resources?
In the next few posts, we’ll introduce and illustrate the primary concepts, terms, and methods for measuring social impact.