A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation
Kia ora (many thanks!), Robyn, for your comments and for the challenge to make the insider/outsider evaluation team ‘logic’ more explicit. Thanks also to Judy Oakden for some offline comments that identified another row that needed to be added (#4). Here’s a further elaborated version of the table from Patricia’s and my earlier post, which discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. See what you think …
| Implicit “Problem” or “Challenge” Addressed | Insider Inclusion Rationale | Likely Practice Implication (How ‘Insiders’ Are Involved) | Likely Evaluation ‘Product’ |
| --- | --- | --- | --- |
| 1. language/cultural communication barrier challenge | making data collection easier | | evaluation framed and conducted using ‘mainstream’ or ‘outsider’ process, worldview, and methodologies |
| 2. cultural klutz problem (“we might approach it wrong and offend people” and/or “we need to demonstrate respect for the way things are done around here”) | helping ensure the evaluation process is appropriate for the context (“process values”) | | evaluation engagement and data collection conducted using a culturally appropriate process; evaluation content and evaluative interpretation may not include ‘deep’ cultural values (see #5) |
| 3. credibility issue (“they won’t listen to outsiders”) | enhancing the credibility and/or impact of the evaluation | depending on credibility “in whose eyes” … | evaluation is ‘fronted’ by highly credible insiders in interactions with the community, the program staff, and/or the funder/client; the ‘messenger’ is credible; whether the message itself deserves credibility is a separate issue (see #5) |
| 4. exclusion as unethical (“these people have the right to a voice at the table”) | social justice, ethics/fairness, authentic inclusion | | an evaluation that strongly reflects the perspective of the community, its interests and concerns, and its historical exclusion from the opportunity to be heard; the product is likely to have high credibility in the community; validity may be high or low depending on how well #5 is addressed |
| 5. validity challenge (“we won’t get the questions, the criteria, or the evaluative conclusions right if the evaluation team is outsiders only” or “we may be missing something important if we are missing this voice in the work”) | including insiders’ values in the evaluation content itself (“deep values”) | | provided the process is competently run by someone with high levels of evaluation expertise (i.e., the questions, criteria, etc. are not based on cultural expertise alone), the product is likely to have high levels of validity and relevance for the context; credibility and utilization are not guaranteed, but are aided by an evaluation with good validity |
| 6. buy-in and/or learning challenge (“they won’t ‘get’ the findings and/or they won’t act on them unless they discover and understand them with their own eyes”) | developmental evaluation, capacity building, and learning | | slower and generally more expensive evaluation process, but with potentially high payoffs in evaluation capacity building, utilization, and program development/improvement; the value of those payoffs hinges on high evaluation validity (#5), the credibility and decision-making authority of those directly involved (#3), and how effectively they share those insights and learnings with other colleagues |
As I worked on this, one thing that came through as critically important is where evaluation expertise (as opposed to content and context expertise) sits on a team, how much of it is present, and of what kind. Some of this has to do with one’s philosophical underpinnings, specifically about the nature of “truth”. Some questions to reflect on:
- Is there such a thing as “getting the evaluation right” or “getting the evaluation wrong”?
- Is there an important distinction to be made between “a well-founded evaluative conclusion” and “a strong consensus among everyone in the room”?
- If “truth” (in this case, a sound evaluative conclusion, i.e., one worth acting on) is a socially constructed product of sensemaking, how do we know that a particular conclusion isn’t off-base as a function of who is and isn’t in the room?
- Should the evaluator simply be a facilitator who helps people reach whatever conclusion makes sense to them, or should he/she coach and guide people through question formation, values definition, and the evaluative interpretation of evidence?
- What kinds of evaluator expertise/competency/capability are needed in order to guide each of the six examples listed above?
- What kinds of insider expertise are needed? What should you seek out when looking for cultural insiders to take the various roles listed above?