I raised a few eyebrows last week when I mentioned the idea of Evaluation-Specific Methodology (ESM) as being an essential part of what defines us as a discipline.
Of course, a large proportion of people who identify as evaluators consider that evaluation is merely the application of social science research methods to support decision making – or something along those lines.
So, not surprisingly, someone asked (in the LinkedIn discussion that started me on this series of blog posts) what everyone else was probably wondering:
“What methodologies are those?”
The methodologies distinctive to evaluation are the ones that go directly after values.
Examples of evaluation-specific methodologies include:
- needs and values assessment
- merit determination methodologies (blending values with evidence about performance, e.g. with evaluative rubrics)
- importance weighting methodologies (both qualitative and quantitative)
- evaluative synthesis methodologies (combining evaluative ratings on multiple dimensions or components to come to overall conclusions)
- value-for-money analysis (not just standard cost-effectiveness analysis or SROI, but also, for example, strategies for handling VfM analysis that involves a lot of intangibles)
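To make the rubric, weighting, and synthesis ideas concrete, here is a minimal sketch in code. Everything in it — the dimensions, the weights, and the rubric thresholds — is invented for illustration; it is one hypothetical way the pieces could fit together, not a method prescribed in this post:

```python
# Hypothetical sketch: merit determination via a simple evaluative rubric
# plus importance weighting and evaluative synthesis. All dimensions,
# weights, and thresholds below are illustrative assumptions.

RUBRIC = [  # (minimum score, evaluative label), checked top-down
    (85, "excellent"),
    (70, "good"),
    (50, "adequate"),
    (0,  "poor"),
]

def rate(score):
    """Translate a 0-100 performance score into an evaluative label."""
    for minimum, label in RUBRIC:
        if score >= minimum:
            return label

def synthesize(scores, weights):
    """Combine per-dimension scores into one weighted overall score."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Evidence about performance on three (hypothetical) dimensions:
scores = {"reach": 90, "outcomes": 65, "equity": 80}

# Importance weights express a values judgment, not more data:
weights = {"reach": 1, "outcomes": 3, "equity": 2}

overall = synthesize(scores, weights)
print(round(overall, 1), "->", rate(overall))
```

The point of the sketch is the "values piece": the weights and the rubric thresholds are where stakeholder values enter explicitly, so the step from evidence to an overall evaluative conclusion is transparent rather than a logical leap.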
I wouldn’t count the following as evaluation-specific: statistics or any of the standard research methods (interviews, observations, surveys, content analysis, or even causal inference methodologies). We clearly draw on these and use them a lot, but they are not distinctive to evaluation because they are not specifically about the “values” piece.
In other words, you could use these (non-evaluative qualitative and quantitative research methods) and still NOT be doing evaluation.
But if you are using ESM (evaluation-specific methodology), you sure ARE evaluating, i.e. drawing conclusions about quality, value, or importance.
And in fact, if you don’t use any ESM, you basically aren’t doing real, genuine evaluation. Either you are skipping the whole evaluative conclusions piece, or you are getting to it by logical leap (e.g. “I looked upon it and saw that it was good”). ESM is what allows us to get systematically and transparently from evidence about performance to evaluative conclusion, by weaving in the values (“how good is good”) piece.
It’s true that several disciplines use evaluation-specific methodologies (e.g. industrial & organizational psychology uses cost-effectiveness analysis). That doesn’t make them “not evaluation-specific” any more than statistics becomes psychology just because psychologists use it.
As I mentioned in the previous post, Michael Scriven and I have proposed a pre-conference workshop on Evaluation-Specific Methodology for AEA (in Washington DC) and a mini-workshop in the main program for AES (in Brisbane, Australia) this year, so if accepted we look forward to clarifying these concepts further!
My 2-day AEA workshop, Actionable Evaluation, has already been approved, and will cover (among other things) how to use evaluative rubrics to draw explicitly evaluative conclusions.
And, of course, see also the following books from Michael and me!
[Already read these? Who would each one be particularly useful for? Please add your thoughtful evaluative reviews on Amazon!]
Related posts:

- Why “What’s the best tool to measure the effectiveness of X?” is totally the wrong question
- Credentialing – identifying the ‘core’ vs ‘specialized’ competencies
- The real values behind ‘value-undiscussable’ evaluation
- “No value-free”: The importance of visible values
- How good is a ‘good’ outcome?
- The importance of values for substantiating evaluative conclusions
- Why genuine evaluation must be value-based