Getting the definition of evaluation right is not simply a matter of putting it to a popularity vote.
The fact that so many don’t see a clear difference between evaluation and other pursuits (such as research, monitoring, audit, organization development, management consulting) doesn’t mean that there isn’t one.
I just couldn’t resist commenting on this LinkedIn discussion, where the question was:
As a member of the CES National Council, a colleague and I have volunteered to try to come up with a definition of evaluation for the CES. So at this stage, we are using various lines of evidence to find out how people define evaluation, or what distinguishes evaluation from research. Another suggestion I was given was to ask people to draw what evaluation means to them.
I am doing my part of the search through social media to see what kind of responses I get. I will do a content analysis, look for similarities and differences, and maybe come up with an image depicting what I’ve heard. Another colleague is conducting a literature review with a grad student of hers, and a third is going to conduct a brief survey with users of evaluation.
What we noticed at our last CES National Council meeting was that we really should have a definition of evaluation on our website. But none of us could agree on a definition. Hence our exploration of different perspectives.
The fundamental difference is that evaluation asks and answers questions about the quality, value, and/or importance of things (design, implementation, outputs, outcomes, impacts, the project/program/policy/etc as a whole, and so on).
If we’re not doing that, we’re not actually doing evaluation.
And that has serious implications for our practice, and how well we can convey the value added of our entire profession.
I really enjoyed Michael Scriven’s webinar the other day (from the Claremont Evaluation Center), where he was talking about why it is absolutely essential we get clear on this:
“This means, for example, being clear about the methodologies that are *distinctive* to evaluation – in contrast with the methodologies that are also part of other related disciplines.
There are still people who identify as evaluators but who say you should *never* make a value judgement. They don’t like to be known as still hanging onto the value-free doctrine, but in fact that’s what they are.”
There are some more snippets of what he said on Michael Scriven’s Facebook page, and I’m hopeful Claremont Graduate University will post the recording of the webinar before too long.
Michael and I are proposing joint workshops on this topic (Evaluation-Specific Methodology) at both the American Evaluation Association and the Australasian Evaluation Society conferences this year. So, if accepted, we look forward to clarifying this for those who are interested.
What do you think?