How to spot a 'lip service' approach to culturally responsive evaluation
So you’ve put out an RFP for an evaluation of a policy, program or initiative intended to serve and effect positive change in a “minority” community. All the proposals look terribly impressive, and they all include “cultural experts” on the evaluation team. How can you distinguish the proposals that show a clear understanding of what it takes to do effective and culturally responsive evaluations from those that merely pay ‘lip service’ to cultural competence?
How to spot an evaluation proposal that is likely to miss the point:
- The only brown faces on the evaluation team are the $15/hr research assistants
- “Cultural expertise” is budgeted for data collection and translation services, NOT for the initial conceptualization of the evaluation, the development of the questions that guide it, or the evaluative interpretation of evidence
- There is a leap to outcomes measurement without any attention to questioning the fundamental assumptions. The evaluation proposal takes as given that the policy, program or initiative is the right one for this community, and that the intended outcomes (goals) as defined by funder and/or provider are the criteria to be used.
How to spot an evaluation proposal that “gets” the relevance of cultural expertise and cultural values:
| Key point to look for in the evaluation proposal | Key implications for evaluation quality & value |
| --- | --- |
| 1a. The project is led by someone who is a member of the relevant cultural group (and has the required language and cultural expertise, as well as strong evaluation expertise); or 1b. the cultural experts on the team include one or more senior, seasoned, credible “heavy hitters” who are positioned in high-influence roles on the evaluation team – AND the daily rates budgeted for them reflect that they are considered high-value senior team members* | Credibility - findings are more believable if the evaluation team has the necessary expertise. Symbolic - conveys that cultural expertise is valued, respected, and taken seriously – strong link to validity points in #2 & #3 below. Utility - providers and the community are more likely to use findings that have come from a credible source. |
| 2. Engagement with the community is to be fronted and led by a senior cultural expert who has appropriate connections and credibility in that community, and who drives the engagement and determines the necessary regard for protocol, approach, and context | Symbolic - the seniority of the ‘front person’ is indicative of how important the project is and how serious the evaluation team is about getting it right. Credibility - the evaluation team is more credible when fronted by, and when engagement is driven by, the right person with the right knowledge and skills. Validity - honest responses are more likely when community engagement is led by someone credible who knows how to engage effectively and appropriately with the community. |
| 3. Cultural experts’ roles are built into the evaluation proposal not just in data collection and translation services, but also in the initial conceptualization of the evaluation, the development of the questions that guide it, and the evaluative interpretation of evidence | Validity - the evaluation is highly likely to come to invalid conclusions without the right cultural expertise being applied in ALL of these components of the evaluation. Utility - providers and the community are more likely to use findings that clearly reflect the needs, strengths, aspirations, and values in the community. |
| 4. There is a specific process built into the plan to share preliminary findings with the community and to allow them to correct any misinterpretations or misrepresentations | Validity - checking understandings is good quality assurance practice in any evaluation. Ethics - part of ethical and responsible engagement with the community, particularly where there has been a history of misrepresentation or misinterpretation. Utility - providers and the community are more likely to use findings that have been through a careful process and have been approved as valid by the community. |
| 5. If results are to be published, there is a specific process in place to get informed community consent to do this and to allow them to vet (and veto if necessary) any content before the paper is submitted – or to say no to publication | Ethics - communities have a right not to be researched, studied, and published about against their will, particularly when the person publishing the findings is effectively making a name for themselves professionally by becoming a published “expert” on that community. |
* I’ll have some more to say in another post about how we should be valuing cultural expertise in $$ terms.
Reflections from discussions with some of Aotearoa’s leading Pasifika and Maori evaluators
Last week I attended a really invigorating regional symposium in Auckland run by anzea (Aotearoa New Zealand Evaluation Association) where we had a ‘critical mass’ of some of the top Maori and Pasifika minds in the profession.
The final session of the symposium was a plenary led by top Samoan evaluator Pale Sauni, who facilitated a powerful and constructive discussion about where evaluations in Pasifika communities can go wrong, and about what we as an evaluation community can do to support and promote evaluation practices that respect, support, and include the needs, values, and aspirations of those communities. A lot of the points listed above arose in that discussion, and I’d like to acknowledge all those involved for helping me clarify my thinking on this.
I particularly liked Pale’s reference to how ‘lip service’ approaches can seriously undermine the validity of the data itself. Paraphrasing (but hopefully not misrepresenting) what he said …
When the evaluation process, the questions, and the design don’t reflect a clear understanding of community values and aspirations, what you’ll get is “McDonald’s” evidence:
“Would you like lies with that?”
Pay our people $20 gift vouchers, come in for your 10-minute interview, and that’s exactly what you’ll get: Lies.
As someone behind me in the audience muttered, “Yep, Margaret Mead all over again …”
More to come …
I have a few more thoughts on this topic that I hope to get to soon in future posts, including:
- how we should be valuing cultural expertise in $$ terms
- mana-enhancing ways of learning from evaluations that ran into problems on the cultural front