In my first Why Evaluate post I asked for a little help coming up with more reasons. You came through in a big way, with a string of comments on my post and on LinkedIn/Twitter.
Here is a set inspired by the post comments. I'll have more based on the social media replies coming soon.
You want to stop those hating on your intervention
Susan Kistler started off the comments with a little nerdcore rap (below the cartoon). There is a lot there, but here is one cartoon it inspired.
Why Evaluate?
You want to understand impacts on your population
You want causation not just correlation
You want to tease apart the contribution
You want actionable information
When I say “just do it” you say “evaluation”
When I say “just do it” you say “evaluation”
Just do it! Evaluation
Just do it! Evaluation
You want to stop those hating on your intervention
You want ultimate vindication
You want people to throw money in your direction
You want to counter misinformation
When I say “just do it” you say “evaluation”
When I say “just do it” you say “evaluation”
Just do it! Evaluation
Just do it! Evaluation
Sophie Alvarez joined in and added one last verse:
You want to know what it entails: dedication
A lot of patience, and documentation,
A clear mind to set up data collection,
To analyze it and to present the final equation
Tease apart the contribution
Stephen J. Gill took off from Susan’s rap.
I would add two “reasons” to the list.
One is implied in Susan’s “tease apart the contribution”. We evaluate to find out what it was that affected the results. It might be the program being evaluated, but most often it is a combination of factors including systems factors that facilitated or were barriers to achieving the intended results. This includes unintended changes in the program and unintended consequences of the program.
Also, we evaluate to reinforce the intended impact of the program. When we ask questions and observe people, we affect the results which can be a good thing for sustainability of the program and its outcomes.
Program sustainability
And the chain continued; Stephen's comments resonated with Sheila B. Robinson.
Stephen pretty much captured what was on my mind in his last two sentences: “we evaluate to reinforce the intended impact of the program. When we ask questions and observe people, we affect the results which can be a good thing for sustainability of the program and its outcomes.”
I’m often thinking not only of accountability, but of sustainability when I evaluate our school-based programs, and go well beyond the required or prescribed evaluation plan (especially with regard to data collection) to be prepared to report to any potential audience that could be involved in decision-making for future programming.
Pilot Testing
Melanie Wasserman jumped in to add a couple more reasons.
These all seem to relate to final/summative evaluations. Here are some reasons to evaluate your project in its early stages (pilot test) and mid-stream:
* Pilot test: to make sure your intervention has a chance of working; and/or to fine-tune your intervention
* Process evaluation: To document your efforts (e.g., we trained this many people, we distributed this many school lunches, etc.); to help you tell your story
Sharing your awesome work
A short and sweet comment by Clare…
To get the word out about the awesome work you do – with real documentation.