Why genuine evaluation must be value-based

Every now and then the question is raised about whether evaluation really needs to incorporate “values”.

Can’t we just measure what needs to be measured, talk to the right people, pass on whatever they (the people and the data) say? Why is there a need to dig into the messiness of “value”? Do we really need to say anything about how “substantial” or “valuable” an outcome is in the scheme of things, whether an identified weakness is minor or serious, whether implementation has been botched or aced, whether the entire program is heinously expensive or an incredible bargain given what it achieves?

Do we really need to do anything seriously evaluative in our evaluation work?

Yes, we do.

“Evaluators” vs. “evaluation(s)”

First of all, let’s just be clear that professional evaluators – people with evaluation knowledge, skill sets, and other relevant capabilities – do many different kinds of tasks as part of their work, and not all of these are actually evaluations. In my own work, I run training and development workshops on evaluation; I advise clients on evaluation strategy; I help them with evaluative thinking … none of these involve anyone actually DOING evaluation.

Sometimes I conduct evaluations myself, solo or as part of a team, and sometimes I facilitate a client group through the process of developing and conducting an evaluation themselves. In both of these situations something is being done that is called “evaluation”. In the former case, I (or my team) am the one doing the evaluating. In the latter case, the client group is doing it with my help.

A common error in reasoning is to say “I’m an evaluator, therefore every activity I do as part of my work should be called evaluation.” This, in my view, is a key barrier in the debate about what evaluation is and is not.

E-valu-ation – it’s not spelled like that for nothing!

One thing that makes something genuinely an e-valu-ation is that it involves asking and answering questions about quality or value. For example …

  • It’s not just measuring outcomes; it’s saying how substantial, how valuable, how equitable those outcomes are.
  • It’s not just reporting on implementation fidelity (did it follow the plan?); it’s saying how good, how effective, and how appropriate the implementation was.
  • It’s not just reporting whether the project was delivered within budget; it’s asking how reasonable the cost was, how cost-effective it was, and so forth.

Who actually identifies and applies “values” in an evaluation?

Are these tasks always done by “a” person, “the” evaluator? Of course not. In a participatory, empowerment, or other collaborative evaluation, the quality and value questions are asked and answered, in the end, by those who are participating in running the evaluation.

Hopefully this is done with the guidance of an evaluator who uses his or her knowledge of evaluation logic and methodology to help the participants:

(a) identify what the “valuable outcomes” and dimensions of “high quality programming” are or might be;
(b) develop robust definitions of “how good is good” on those criteria;
(c) gather a good mix of evidence;
(d) interpret that evidence relative to the definitions of “how good is good”; and
(e) bring all the findings together to consider the ‘big picture’ evaluation questions.

In some cases, the evaluator may do the data collection on behalf of the client but then facilitate them through the process of evaluative interpretation.

In both of the above situations, the “evaluator” is not actually DOING the evaluating, but rather is facilitating and coaching a group of people through the process of doing it themselves. The evaluation specialist still needs the knowledge and skills to be able to help people do this in a robust and defensible way.

Why bother with the “values” piece at all?

Why would a client ask for an “evaluation” as opposed to a “measurement project”, for example?

Clients don’t just need to know “what are the outcomes?” (what’s so); they need answers to evaluative questions (so what) that are going to help inform decision making (now what).

From time to time I am asked to review evaluations that have been completed, where the client is trying to figure out why they have not been at all useful. Time after time I see examples where the contractor has gone ahead and done a measurement or research job instead of an evaluation. The client has “findings”, but they are descriptive rather than evaluative in nature, and there are important dangling questions left unanswered. Questions like, “OK, you’ve identified a problem, but is it serious or not?” Or, “The outcomes look like a bit of a mixed bag – have we done well on the important ones or only on the trivial ones?” Or, “There are lots of interesting stories in here, but was the whole initiative a waste of time and money or not?”

Unless questions like this – the evaluative questions – are actually answered, the client is left with something that is hard to interpret, hard to action, hard to make use of.

It’s the evaluative, i.e. value-based, nature of evaluation that actually makes it useful (talking here about instrumental use in particular; process use is a whole other conversation!). That’s why clients ask for it (although they may not have the knowledge to articulate their needs quite like I have here), and that’s what we should deliver.

4 comments to Why genuine evaluation must be value-based

  • Gilles Mireault

    Drs Davidson and Rogers,

    Seems like creating this blog was a good idea!

    As an internal evaluator in a child protective services agency in Québec City, I certainly need to read, talk, and think about how to do genuine evaluation in my daily work.

    It’s not easy to do. One of the first challenges is to disentangle research questions from evaluation questions. People want a lot of information on their programs, projects, and services, but don’t necessarily insist on the value side of their evaluand. Maybe that’s why we frequently end up with lots of data and not much evaluation.

    I still find it very difficult to conduct a good genuine evaluation. Maybe we could discuss this more in other posts.

    Keep up the good work!

  • I’m liking this blog!

    I think one of the points you’ve hit on is the difference between researchers who do evaluation, and evaluators.

    My background is community psychology, so I wouldn’t think of doing an evaluation that wasn’t useful for improving services or finding better ways to do something. We are trained to conduct research and do evaluations to take action. In the case of program fidelity, it’s not only important to measure it, but also to record adaptations. Sometimes the adaptations or departures from fidelity are what makes a program work better.

    I’m currently in a position where we receive a lot of funding from the State and the evaluation they want done is merely bean counting for accountability. It mostly has nothing to do with gathering information that can be useful. The question here is how do we convince funding agencies that they need to include sufficient funding for the evaluation team to do something useful?

  • Jane Davidson

    Gilles and Susan, many thanks for your insights on this. You have touched on another of my hobby horses.

    I’m actually going to start a new post for this because I think it’s really important. Thanks for raising the issue!

    New post: Who’s responsible for un-genuine evaluation?

  • Liz Riley

    Very interesting and very much in line with my views too. Here in the UK, I think the problem stems from an assumption that evaluators should have an academic research skill set (just look at some of the job adverts to see what’s required) rather than the skills to elicit the views and ideas of the various sets of people impacted by the work.

    I’ll check out your new post too
