Why “What’s the best tool to measure the effectiveness of X?” is totally the wrong question
“What are the best tools to measure the effectiveness of [insert any program, policy, or initiative]?”
It’s a classic case of thinking that evaluation is merely measurement, and that measurement gives you the answers.
Many managers and non-evaluators think like this – that evaluation is merely a process of picking a few indicators and measuring them, and that somehow the value or effectiveness of the evaluand (program, policy, initiative, etc.) will be miraculously self-evident.
The reality is that many evaluators think like this too. If they’re not of the “indicators” mindset, the conversation so often turns to methods (qual, quant), or tools, or instruments.
Consider this quote (out of the latest NDE, #133) from Michael Quinn Patton:
… Moreover, and this is critically important, [Scriven] shows that valuing is fundamentally about reasoning and critical thinking.
Evaluation as a field has become methodologically manic-obsessive.
Too many of us, and those who commission us, think that it’s all about methods.
It’s all about reasoning.
(p. 105; quote broken into paragraphs for readability)
MQP makes the above comment in the latest brilliant edition of New Directions for Evaluation (#133, on Valuing, edited by George Julnes).
A great read from cover to cover no matter what, where, or how you evaluate; this issue goes right to the heart of one of the fundamental elements of genuine evaluation — values.
So, values and “valuing” are what make evaluation, well, e-valu-ation.
And reasoning and critical thinking are central to “valuing”.
Evaluation, then, is about using evaluative reasoning and critical thinking to draw values-based conclusions from the evidence and from the definitions of “quality” and “value” (definitions that were themselves developed through evidence-informed reasoning).
Evidence is gathered using various methods and – yes – tools.
The evidence is but one ingredient in the evaluation; evaluative reasoning is how we determine what evidence to gather in the first place, and how to interpret it once we have it.
Once we get a grip on this fundamental reality, it’s amazing how it can change the whole conversation and steer it in the direction of more genuine evaluation.
Instead of leaping to tools and instruments, we come right back to the big-picture purpose of the evaluation, its intended users, and what they intend to (or could potentially) use the evaluation for:
- Who asked for this evaluation and why?
- Who are the people who need answers, to what questions, for what purposes, and to inform what thinking or decision making?
At this point the reasoning elements are key:
- How should “effectiveness” (or quality, performance, value, etc.) be defined in this context? Based on what? And whose expertise (including the local knowledge of recipients and community members) do we need in order to get this right?
- Which outcomes should be considered valuable, and are some of them more valuable or important than others? Why? Based on what?
- How big an impact would be “enough” given the investment of time, effort, and resources that went into this?
… and it’s only then that the methods or tools even become relevant:
- What mix of evidence would be convincing when answering those questions?
- What data sources or tools might we use to capture that evidence?
Once the evidence is in, that doesn’t mean the answers to those big-picture questions will have miraculously materialized.
No; the evaluation team’s job is far from done. There’s more evaluative reasoning and critical thinking involved in making sense of the evidence:
- What performance picture does the evidence paint, when we look at it alongside our earlier reasoning about what “good” or “effective” should look like?
- Who might disagree with our interpretation, and what evidence or reasoning would they offer to support their conclusions? How would we know if they were right? How can we check?
- What’s the most powerful and compelling way to present our findings so that it (1) engages our audiences, (2) conveys the most important insights clearly and quickly, and (3) provokes genuine evaluative thinking about what it all means and what the organization should consider doing next?
What are some strategies that have worked – or failed! – for you when trying to get either evaluators or clients to understand the importance of evaluative reasoning and critical thinking?
Please share your experiences in the comments.