How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)

So you’ve put out an RFP for an evaluation of a policy, program or initiative intended to serve and effect positive change in a “minority” community. All the proposals look terribly impressive, and they all include “cultural experts” on the evaluation team. How can you distinguish the proposals that show a clear understanding of what it takes to do effective and culturally responsive evaluations from those that merely pay ‘lip service’ to cultural competence?

How to spot an evaluation proposal that is likely to miss the point:

  • The only brown faces on the evaluation team are the $15/hr research assistants
  • “Cultural expertise” is budgeted for data collection and translation services, NOT for the initial conceptualization, development of evaluation questions to guide the evaluation, or evaluative interpretation of evidence
  • There is a leap to outcomes measurement without any attention to questioning the fundamental assumptions. The evaluation proposal takes as given that the policy, program or initiative is the right one for this community, and that the intended outcomes (goals) as defined by funder and/or provider are the criteria to be used.

How to spot an evaluation proposal that “gets” the relevance of cultural expertise and cultural values:

For each key point to look for in the evaluation proposal, the key implications for evaluation quality & value are listed beneath it.

1a. The project is led by someone who is a member of the relevant cultural group (and has the required language and cultural expertise, as well as strong evaluation expertise)

OR

1b. The cultural experts on the team include one or more senior, seasoned, credible “heavy hitters” who are positioned in high-influence roles on the evaluation team – AND the daily rates budgeted for them reflect that they are considered high-value senior team members*

Key implications:

  • Credibility – findings are more believable if the evaluation team has the necessary expertise
  • Symbolic – conveys that cultural expertise is valued, respected, and taken seriously – strong link to validity points in #2 & #3 below
  • Utility – providers and the community are more likely to use findings that have come from a credible source

2. Engagement with the community is to be fronted and led by a senior cultural expert who has appropriate connections and credibility in that community and who drives the engagement and determines the necessary regard for protocol, approach, and context

Key implications:

  • Symbolic – the seniority of the ‘front person’ is indicative of how important the project is and how serious the evaluation team is about getting it right
  • Credibility – the evaluation team is more credible when fronted by – and when engagement is driven by – the right person with the right knowledge and skills
  • Validity – honest responses are more likely when community engagement is led by someone credible who knows how to engage effectively and appropriately with the community

3. Cultural experts’ roles are built into the evaluation proposal not just in data collection and translation services, but also in:

  • initial conceptualization of the entire evaluation
  • development of overarching evaluation questions to guide the evaluation
  • careful identification of which outcomes should be considered “valuable” (or “detrimental”) in the cultural context (a.k.a. needs and strengths assessment)
  • careful identification of what aspects of programming and implementation should be considered “high quality” and “culturally appropriate”
  • evaluative interpretation of findings, including any further probing to ensure evidence is correctly understood and interpreted

Key implications:

  • Validity – the evaluation is highly likely to come to invalid conclusions without the right cultural expertise being applied in ALL these components of the evaluation
  • Utility – providers and the community are more likely to use findings that clearly reflect the needs, strengths, aspirations, and values in the community

4. There is a specific process built into the plan to share preliminary findings with the community and to allow them to correct any misinterpretations or misrepresentations

Key implications:

  • Validity – good quality assurance practice in any evaluation to check understandings
  • Ethics – part of ethical and responsible engagement with the community, particularly where there has been a history of misrepresentation or misinterpretation
  • Utility – providers and the community are more likely to use findings that have been through a careful process and have been approved as valid by the community

5. If results are to be published, there is a specific process in place to get informed community consent to do this and to allow them to vet (and veto if necessary) any content before the paper is submitted – or to say no to publication.

Key implications:

  • Ethics – communities have a right not to be researched, studied, and published about against their will, particularly when the person publishing the findings is effectively making a name for themselves professionally by becoming a published “expert” on that community

* I’ll have some more to say in another post about how we should be valuing cultural expertise in $$ terms.

Reflections from discussions with some of Aotearoa’s leading Pasifika and Maori evaluators

Last week I attended a really invigorating regional symposium in Auckland run by anzea (Aotearoa New Zealand Evaluation Association) where we had a ‘critical mass’ of some of the top Maori and Pasifika minds in the profession.

The final session of the symposium was a plenary discussion led by top Samoan evaluator Pale Sauni, who facilitated a powerful and constructive discussion about where evaluations in Pasifika communities can go wrong and what we as an evaluation community can do to support and promote the best possible evaluation practices that respect, support, and include the needs, values, and aspirations of those communities. A lot of the points listed above arose in the discussion, and I’d like to acknowledge all those involved for helping me clarify my thinking on this.

I particularly liked Pale’s reference to how ‘lip service’ approaches can seriously undermine the validity of the data itself. Paraphrasing (but hopefully not misrepresenting) what he said …

When the evaluation process, the questions, and the design don’t reflect a clear understanding of community values and aspirations, what you’ll get is “McDonald’s” evidence:

“Would you like lies with that?”

Pay our people $20 gift vouchers, come in for your 10-minute interview, and that’s exactly what you’ll get: Lies.

As someone behind me in the audience muttered, “Yep, Margaret Mead all over again …”

More to come …

I have a few more thoughts on this topic that I hope to get to soon in future posts.

8 comments to How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)

  • Hi Jane,

This made me smile and cringe as it was so on point, still in 2010. Particularly appreciated your frame of comparing the proposal to the standards of quality and value in evaluation. Elements that resonated most strongly with me included:

    1a. The project is either led by someone who is a member of the relevant cultural group (and has the required language and cultural expertise, as well as strong evaluation expertise).

Comment: Given the development of the evaluation field and its lack of diversity at the senior level, at least in the US, it is challenging to have the experience, institutional recognition, and infrastructure to be seen as “safe” enough for funders to invest in emerging organizations in this role.

    1b. The cultural experts on the team include one or more senior, seasoned, credible “heavy hitters” who are positioned in high-influence roles on the evaluation team – AND the daily rates budgeted for them reflect that they are considered high-value senior team members*

    Comment: Really appreciate the notion that you get what you pay for.

    2. Engagement with the community is to be fronted and led by a senior cultural expert who has appropriate connections and credibility in that community and who drives the engagement and determines the necessary regard for protocol, approach, and context

    Comment: Important to note that this cultural expert should not be defined by appearance but by experience.

    3. Cultural experts’ roles are built into the evaluation proposal not just in data collection and translation services,

    Comment: It starts within design and extends far beyond translation in to sense making.

    Great start to my week thinking about this and its implications. Much appreciated.

  • Jane Davidson

    Thanks so much for your comments, Jara. Thanks too for highlighting that key point – it’s important not just to have *any* “brown face” on the evaluation team; it’s critically important that there’s at least one (and preferably more) who’s senior, credible, knowledgeable about evaluation, and experienced with the relevant community or culture.

    Another major problem with having only very junior evaluators of color on an evaluation team is that it unfairly puts a huge weight of responsibility on them to question the evaluation approach and challenge the project director (someone much more senior, experienced, respected in the [mainstream] profession, and who has power over their employment and/or academic grades).

    We had a really interesting discussion at the anzea symposium about people’s experiences with being the “bit on the side” – the token brown face on the team that would help win the contract, but actually having an insufficiently influential role to ensure the evaluation was conducted in a culturally appropriate way.

    I’ve been burned by this kind of thing myself – not as the token brown face (obviously!) but as the person whose expertise was presented in the proposal as key for ensuring the evaluation would be up to scratch. When push came to shove and the budget was tight, my time got pruned right down to next to nothing, just reading and critiquing draft reports. But what was even worse, my advice was ignored in draft after draft, with the same fundamental flaws coming back over my desk again and again. In the end I had to ask that my name not be used on the final report because the quality was so far below anything I would have approved. The client felt like a victim of a “bait and switch” and so did I.

    More on “you get what you pay for” – and more – in another post!

    Jane

  • Mark Dalgety

    Hi Jane
    I missed this post when it went up. Thanks for putting into the form of a table the processes and their implications to support a culturally responsive evaluation through the composition of the team and their roles and responsibilities. I have appreciated the way more communities are articulating and asserting what processes enhance their wellbeing and making transparent the hidden power relations involved. I have worked with former refugees here in New Zealand and one organisation has developed principles and standards for engaging with their communities. I have found this a helpful ‘good practice’ guide for various contexts and it includes their expectations about community involvement in the team.
    http://www.goodpracticeparticipate.govt.nz/working-with-specific-groups/other-ethnic/standards-for-engagement.pdf
I too appreciated Pale’s workshop at ANZEA in Wellington, where he conveyed, with his Samoan wit and brevity, the common experience and frustrations of Pasifika culture being a last-minute add-on rather than an integrated component of research and evaluation. One part of the discussion I valued was our experience of “white mainstream” knowledge being automatically privileged and assumed as the default for practice, rather than an option to be explored to see whether the underlying values fit or are in conflict with the context. Your table is another tool to become more conscious and systematic in addressing these hidden (my visual image is of an iceberg) ethnocentric drivers.

  • Jane Davidson

    Mark, many thanks for your comments and especially for the excellent ‘good practice’ guide for working with refugee communities. I’ve saved that one to my stash of worthwhile papers!

  • Debbie Miller

Hi Jane, I love your points. An additional comment about the color of faces is to consider that some people who have grown up as “third culture kids” (TCKs) may have considerable cultural expertise in certain situations without appearing to meet this criterion based on appearance. TCKs have spent a significant portion of their formative years living in a culture other than their parents’ own but then have returned to their parents’ culture of origin. The tendency is then to be a “hidden immigrant” (appearing to identify with a culture they do not), relating to elements of multiple cultures without relating fully to any single culture. See, for example, Pollock, D. C., & van Reken, R. E. (2009). Third culture kids: The experience of growing up among worlds (3rd ed.). Boston, MA: Nicholas Brealey North America.

  • Jane Davidson

Debbie, that’s very interesting – I’d never heard of third culture kids before. Have just read through Wikipedia’s entry on TCKs, which was fascinating, so many thanks!

    For me it also raises a flipside of that point – that cultural expertise is developed from lived experience, not genetics, so that being genetically a member of a particular group does not automatically make one a cultural expert. Some grow up far from their cultural roots, sometimes not even knowing “who they are” until later in life. Some are adopted into a culture that is different from the one they were born into. Some are raised in a bicultural and bilingual family, but get much less exposure to one culture than the other.

  • Tarina MacDonald

Like the many comments before me have expressed: a great post on the “lip service” paid to culturally responsive evaluations. As a post-grad student of evaluation who’s grappling with the theories and principles of evaluation in one paper and the methods and techniques in another, it is very refreshing to come into this forum and get a ‘REAL’ education on evaluation.

Jane, your panoply of cultural considerations hit the nail on the head. It reflects not only the breadth of your knowledge in the evaluation field, but also the integrity to ensure that minorities are not manipulated by evaluators who want to make their name at the expense of – and on the back of – communities who experience REAL issues and challenges with programs that aren’t useful to them or the people within those communities, but that they have to accept because there are no other alternatives.

Project leadership/appointments/demarcations, pay rates, sub-contractor rates, lip service titles, and exploitation of knowledge experts (e.g. kaumatua, or elders) form such a classic pattern that the term ‘dial a kaumatua’ has been coined for it. So thank you for the clarity on assessing quality evaluation proposals – definitely something I’ll add to my reference list.

  • Jane Davidson

    Kia ora, Tarina, for your comment!

    You have reminded me to put in a link to a later post I wrote, on Valuing cultural expertise in $$ terms.

    As Jara said earlier, there’s a “you get what you pay for” factor, but I actually think it’s more problematic than this, as reflected in the ‘dial a kaumatua’ (dial an elder) approach you remind us of.

    There’s a pervasive problem of underpaying people who contribute cultural expertise that is absolutely critical to the evaluation.

    I find this not only disrespectful; I think it is highly symbolic that their expertise is not truly valued or taken seriously in the evaluation. And that has important implications for evaluation validity, credibility, and therefore utilization.

    Three things I think can be done about this:

    1. Evaluators with cultural expertise make sure their daily/hourly rates reflect the value of what they do – see the “valuing cultural expertise in $$ terms” post!

    2. Evaluators who hire people with cultural expertise to work alongside them on projects need to

      (a) hire “heavy hitters” who will hold them to account, not whipper-snappers they can easily control or ignore;
      (b) partner with them in evaluation conceptualization and other key roles; and
      (c) pay them at a rate that reflects and respects the value of their contributions.

    3. We ALL need to educate clients to understand the difference between lip service and a genuinely culturally responsive approach to evaluation (that was the intent of this post, so please share with clients, people!!) – and convince them that communities, taxpayers, and other stakeholders are not fooled by ritualistic and insincere “cultural window-dressing”.