Every now and then the question is raised about whether evaluation really needs to incorporate “values”.
Can’t we just measure what needs to be measured, talk to the right people, pass on whatever they (the people and the data) say? Why is there a need to dig into the messiness of “value”? Do we really need to say anything about how “substantial” or “valuable” an outcome is in the scheme of things, whether an identified weakness is minor or serious, whether implementation has been botched or aced, whether the entire program is heinously expensive or an incredible bargain given what it achieves?
Do we really need to do anything seriously evaluative in our evaluation work?
Yes, we do.
“Evaluators” vs. “evaluation(s)”
First of all, let’s just be clear that professional evaluators – people with evaluation knowledge, skill sets, and other relevant capabilities – do many different kinds of tasks as part of their work, and not all of these are actually evaluations. In my own work, I run training and development workshops on evaluation; I advise clients on evaluation strategy; I help them with evaluative thinking … none of these involve anyone actually DOING evaluation.
Sometimes I conduct evaluations myself, solo or as part of a team, and sometimes I facilitate a client group through the process of developing and conducting an evaluation themselves. In both of these situations something is being done that is called “evaluation”. In the former case, I (or my team) am the one doing the evaluating. In the latter case, the client group is doing it with my help.
A common error in reasoning is to say “I’m an evaluator, therefore every activity I do as part of my work should be called evaluation.” This, in my view, is a key barrier in the debate about what evaluation is and is not.
E-valu-ation – it’s not spelled like that for nothing!
One thing that makes something genuinely an e-valu-ation is that it involves asking and answering questions about quality or value. For example …
- It’s not just measuring outcomes; it’s saying how substantial, how valuable, how equitable those outcomes are.
- It’s not just reporting on implementation fidelity (did it follow the plan?); it’s saying how well, how effectively, and how appropriately the implementation was carried out.
- It’s not just reporting whether the project was delivered within budget; it’s asking how reasonable the cost was, how cost-effective it was, and so forth.
Who actually identifies and applies “values” in an evaluation?
Are these tasks always done by “a” person, “the” evaluator? Of course not. In a participatory, empowerment, or other collaborative evaluation, the quality and value questions are asked and answered, in the end, by those who are participating in running the evaluation.
Hopefully this is done with the guidance of an evaluator who uses his or her knowledge of evaluation logic and methodology to help the participants:
(a) identify what the “valuable outcomes” and dimensions of “high quality programming” are or might be;
(b) develop robust definitions of “how good is good” on those criteria;
(c) gather a good mix of evidence;
(d) interpret that evidence relative to the definitions of “how good is good”; and
(e) bring all the findings together to consider the ‘big picture’ evaluation questions.
In other cases, the evaluator may do the data collection on behalf of the client, but then facilitate them through the process of evaluative interpretation.
In both of the above situations, the “evaluator” is not actually DOING the evaluating, but rather is facilitating and coaching a group of people through the process of doing it themselves. The evaluation specialist still needs the knowledge and skills to be able to help people do this in a robust and defensible way.
Why bother with the “values” piece at all?
Why would a client ask for an “evaluation” as opposed to a “measurement project”, for example?
Clients don’t just need to know “what are the outcomes?” (what’s so); they need answers to evaluative questions (so what) that are going to help inform decision making (now what).
From time to time I am asked to review evaluations that have been completed because the client is trying to figure out why they have not been at all useful. Time after time I see cases where the contractor has gone ahead and done a measurement or research job instead of an evaluation. The client has “findings”, but they are descriptive rather than evaluative in nature, and there are important dangling questions left unanswered. Questions like, “OK, you’ve identified a problem, but is it serious or not?” Or, “The outcomes look like a bit of a mixed bag – have we done well on the important ones or only on the trivial ones?” Or, “There are lots of interesting stories in here, but was the whole initiative a waste of time and money or not?”
Unless questions like these – the evaluative questions – are actually answered, the client is left with something that is hard to interpret, hard to act on, hard to make use of.
It’s the evaluative, i.e. value-based, nature of evaluation that actually makes it useful (talking here about instrumental use in particular; process use is a whole other conversation!). That’s why clients ask for it (although they may not have the knowledge to articulate their needs quite like I have here), and that’s what we should deliver.