I’m not sure I can come up with a ‘Copernican’ revolution on the scale Michael Scriven described in his previous post, but perhaps I can run an idea up the flagpole that came as a realization or light-bulb moment for me and still seems to surprise and sometimes amaze other people I talk to and work with …
There is a long-held belief (both within and outside the evaluation profession) that evaluations, particularly program evaluations, that draw explicitly evaluative conclusions (about how valuable outcomes are, how well-designed and implemented programs are, whether they are a waste of money or outstandingly cost-effective, etc.) are somehow diametrically opposed to, or completely incompatible with, culturally responsive evaluations that fully reflect and respect the cultural values and worldviews of indigenous peoples and others whose voices are often not heard.
To put this another way, many people believe that the ‘valuing’ or ‘making evaluative judgments’ part of evaluation is all about imposing ‘mainstream values’, and that this is fundamentally at odds with an inclusive or social justice-oriented evaluation agenda. On the flip side, there’s another assumption that a culturally responsive evaluation is somehow a weak, ‘warm fuzzy’ type of evaluation that never says anything hard-nosed about quality or value.
Over the past few years it’s become very clear to me that explicitly evaluative approaches are one of the most important, systematic and meaningful ways in which cultural values and worldviews can be built not just into how evaluations are conducted, but right into the heart of the evaluative criteria, the evaluative interpretation/sense-making, and the evaluative conclusions.
In fact, the key explicitly evaluative processes – (1) identifying criteria such as ‘outcomes of value’, (2) defining ‘how good is good’ on those criteria (e.g. using evaluative rubrics – see Evaluation Methodology Basics), and (3) interpreting the evidence against the ‘how good is good’ definitions to draw explicitly evaluative conclusions – not only require cultural values, aspirations, and worldviews to be built right into them (otherwise they won’t be valid); they also lend themselves very naturally to highly participatory evaluation processes (if this is the chosen approach).
One thing that has surprised me when having various groups work on developing rubrics that better reflect community and cultural values is that the groups often set far tougher standards than I would have set for them if I’d been working independently.
OK, it’s probably nowhere near ‘Copernican’ as a rethink of evaluation, but it’s one of a string of long-cherished false dichotomies that have prevented theory and practice from moving forward. JMHO …