“No value-free”: The importance of visible values

The “no value-free” line of the Genuine Evaluation song (composed by the incomparable Kataraina Pipi, evaluator and composer/musician, with input from several other genuine evaluators!) was inspired by an earlier post where we defined genuine evaluation and drew some lines in the sand about what was in and what was out.

One of the things that is definitely out is so-called ‘value-free’ evaluation.

What are ‘value-free’ evaluations?

These are “evaluations” (not!) that avoid the all-important task of answering any questions about how good, valuable, worthwhile, or important anything is – the outcomes, the design, the implementation, and/or the entire evaluand (program, policy, product, service, etc.).

Value-free evaluations simply present a Rorschach inkblot of evidence, complete with various descriptive analyses, such as key themes, most prevalent responses, statistical significance of differences, graphs and illustrations of changes – BUT they stop short of saying whether these findings represent worthwhile outcomes or quality programming.

Why the ‘values’ bit is part of our job description

I’ve mentioned in an earlier post (Why genuine evaluation must be value-based) that the word eVALUation doesn’t have the word ‘value’ in it for nothing!

The whole point of evaluation is to ask and answer questions about quality, value and importance – which clients and other readers/listeners can USE in some way (e.g. to take action, to inform decision making, to inform future design or programming, etc).

If we fail to do this we are, in my view, shirking our responsibilities as evaluators. The message to the client is “You work it out.”

‘Value-free’ is often a mix of ‘value-undiscussable’ and ‘value-poor’

There is an important reality that those who commission, use, or produce ‘value-free’ work should bear in mind:

Values ARE in fact being applied in that work; they are just not being made explicit (that’s the ‘value-undiscussable’ bit) – and they don’t go far enough (the evaluation is ‘value-poor’).

Here’s what I mean by value-undiscussable:

  1. When we select the outcomes to be explored as part of an evaluation, we are implicitly saying that “these outcomes are potentially valuable” (the ones we decide to include) and “these outcomes are not” (the outcomes we decide not to bother exploring).
  2. When we are not clear and explicit about how and why the outcomes were selected for exploration and why others weren’t, we are making those value-based decisions undiscussable and [deliberately?] not open for question or challenge.

Here’s what I mean by value-poor:

  1. If we go to the bother of [implicitly or explicitly] declaring an outcome ‘potentially valuable’ (and this includes outcomes that are potentially detrimental – they have negative value), then why not take the next step and say something about just HOW valuable the actual outcome IS once the evidence is in? If we don’t bother with that part, the evaluation is value-poor; it applies values only weakly, leaving out the most useful bit.

Does this basically mean that I have to be clear about “my personal values” as an evaluator and as a person?

That’s one useful place to start, and good practice in general, BUT I am not just talking about “personal values” here.

I am talking about how we define quality and value within a professional evaluation context.

I don’t mean “Which outcomes do I value?”

I don’t even mean “Which outcomes are valued by stakeholders?”

I mean “Which outcomes are demonstrably valuable in this context and for these recipients/impactees?”

The key here is that professional evaluation needs to make a case for which outcomes are “demonstrably valuable” in a particular context – which means going a lot further than evaluators just pulling opinions or preferences out of their own heads, or even out of the heads of stakeholders.

I’m not saying don’t ask stakeholders – that is almost always an extremely important source of information. But I am saying don’t stop there.

Outcomes are valuable, important, and worth achieving not just because somebody consciously desires or values them, but because they add value to people’s lives by helping them realize their potential, meeting a need, fulfilling an aspiration, making them healthier or more successful in work or business, etc.

The keys to evaluative transparency – visible values – are:

  1. Defining things like “valuable outcomes”, “quality programming and delivery”, and/or “good value for money/time/effort” based on concrete evidence from a range of sources, not just by plucking opinions from people’s heads and saying that’ll do.
  2. Spelling out the evaluative reasoning, which includes:
    • identifying the criteria (what we will look at) AND
    • the definitions of quality or value on those criteria (how we will interpret the evidence) AND
    • how the evidence is interpreted against those criteria AND
    • how the successes and disappointments, pros and cons are weighed in drawing overall conclusions about the evaluand (a rough sketch follows this list)
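To make this concrete, here is a rough sketch (in Python, purely as a thought experiment) of what ‘visible values’ might look like once written down explicitly. Every criterion, weight, quality definition, and rating below is invented for illustration – this is not anyone’s actual rubric – and a numeric weighted average is only one contestable way of synthesizing. The point is simply that each choice is on the table where it can be questioned.

    # Hypothetical illustration only: a tiny sketch of an explicit
    # evaluative rubric. All criteria, weights, quality definitions,
    # and ratings below are invented for the example.

    # 1. The criteria (what we will look at), with explicit weights.
    WEIGHTS = {"reach": 0.3, "skill_gains": 0.5, "value_for_money": 0.2}

    # 2. Definitions of quality on those criteria (where the bar sits).
    LEVELS = {"poor": 1, "acceptable": 2, "excellent": 3}

    # 3. How the evidence was interpreted against the criteria
    #    (invented ratings for an invented program).
    RATINGS = {"reach": "acceptable",
               "skill_gains": "excellent",
               "value_for_money": "poor"}

    # 4. How pros and cons are weighed into an overall conclusion –
    #    a weighted average is just one (contestable) way to do this.
    overall = sum(WEIGHTS[c] * LEVELS[RATINGS[c]] for c in WEIGHTS)
    print(f"Overall: {overall:.1f} on a 1-3 scale")  # prints 2.3

The point is not the arithmetic; it is that every weight and every bar is now visible, so a critic can object to something specific (“the weight on value for money is too low”) rather than dismissing the whole conclusion as mere opinion.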

Visible values make for better quality ‘valuing’

The clearer and more transparent the ‘values’ that underlie the ‘valuing’ (or evaluative reasoning) done in evaluation, the easier they are to criticize. This may make many evaluators extremely nervous – it’s like putting yourself up for target practice.

HOWEVER, as evaluators we are all ‘selling’ the idea that criticism and feedback are important for the improvement of programs, policies, products, services, and so forth. Surely the same applies to evaluation itself.

The upside is that, if the values and the evaluative reasoning are clear, criticisms are more likely to be aimed at the specifics of the evaluative reasoning (or the evidence used) rather than at the evaluator personally. So, the conversation shifts from “Well, that’s just YOUR opinion” to “I believe you have set the bar too high here when you define ‘acceptable’ levels on these outcomes.”

It’s those kinds of evaluative conversations that can really engage intended users AND lead to better evaluative thinking on the part of the client as well as the evaluator.
