A fleshed-out ‘program logic’ for why and where ‘insiders’ are included in evaluation

Kia ora (many thanks!), Robyn, for your comments and your challenge to make the insider/outsider evaluation team ‘logic’ more explicit. And thanks to Judy Oakden for some offline comments that identified another row that needed to be added (#4). Here’s a further elaborated version of the table presented in Patricia’s and my earlier post, which discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. See what you think …

Each numbered scenario below covers the four columns of the table: the implicit “problem” or “challenge” addressed, the insider inclusion rationale, the likely practice implication (how ‘insiders’ are involved), and the likely evaluation ‘product’.

1. Language/cultural communication barrier challenge
Rationale: making data collection easier
How ‘insiders’ are involved:
  • insiders hired for fieldwork, translator, and interpreter roles
  • unlikely to hold senior/influential/leadership roles on the project unless other rationales are also in play
Likely evaluation ‘product’: evaluation framed and conducted using a ‘mainstream’ or ‘outsider’ process, worldview, and methodologies

2. Cultural klutz problem (“we might approach it wrong and offend people” and/or “we need to demonstrate respect for the way things are done around here”)
Rationale: helping ensure the evaluation process is appropriate for the context (“process values”)
How ‘insiders’ are involved:
  • insiders may lead the evaluation
  • insiders take highly visible evaluation process design, facilitation, and community/client contact roles
  • insiders may also be asked to act as a ‘critical friend’, advising on the way the evaluation is conducted
Likely evaluation ‘product’: evaluation engagement and data collection conducted using a culturally appropriate process; evaluation content and evaluative interpretation may not include ‘deep’ cultural values (see #5)

3. Credibility issue (“they won’t listen to outsiders”)
Rationale: enhancing the credibility and/or impact of the evaluation
How ‘insiders’ are involved (depending on credibility “in whose eyes”):
  • relatively senior insiders take key roles in engaging with the community and/or the client/funder
  • when engaging with the funder, a different kind of ‘insider’ may be used, i.e., an ‘insider’ relative to the board/senior management/funder rather than to the recipient community
Likely evaluation ‘product’: evaluation is ‘fronted’ by highly credible insiders in interactions with the community, the program staff, and/or the funder/client; the ‘messenger’ is credible, but whether the message itself deserves credibility is a separate issue (see #5)

4. Exclusion as unethical (“these people have the right to a voice at the table”)
Rationale: social justice, ethics/fairness, authentic inclusion
How ‘insiders’ are involved:
  • insiders extensively consulted on the overall framing of the evaluation
  • authentic participation by insiders in the evaluation process itself
  • insiders can influence key decisions about the evaluation design, conduct, and reporting; they may have veto rights
Likely evaluation ‘product’: an evaluation that strongly reflects the community’s perspective, interests, and concerns, and its historical exclusion from the opportunity to be heard; the product is likely to have high credibility in the community; validity may be high or low depending on how well #5 is addressed

5. Validity challenge (“we won’t get the questions, the criteria, or the evaluative conclusions right if the evaluation team is outsiders only” or “we may be missing something important if we are missing this voice in the work”)
Rationale: including insiders’ values in the evaluation content itself (“deep values”)
How ‘insiders’ are involved:
  • insiders take front-end conceptual roles (determining what ‘good quality programming’ and ‘high-value outcomes’ mean in this context)
  • insiders also take back-end evaluative interpretation and sensemaking roles
  • insiders may lead the evaluation; if not, they will be senior and present in ‘critical mass’ numbers
Likely evaluation ‘product’: provided the process is competently run by someone with high levels of evaluation expertise (i.e., the questions, criteria, etc. are not based on cultural expertise alone), the product is likely to have high validity and relevance for the context; credibility and utilization are not guaranteed, but are aided by an evaluation with good validity

6. Buy-in and/or learning challenge (“they won’t ‘get’ the findings and/or they won’t act on them unless they discover and understand them with their own eyes”)
Rationale: developmental evaluation, capacity building, and learning
How ‘insiders’ are involved:
  • insiders heavily involved in setting the evaluation agenda (e.g., questions, focus), interpreting major findings, and developing recommendations
  • insiders may be involved in data collection, if “seeing things with their own eyes” is an important part of the learning (and/or unlearning) process
Likely evaluation ‘product’: a slower and generally more expensive evaluation process, but with potentially high payoffs in evaluation capacity building, utilization, and program development/improvement; the value of those payoffs hinges on high evaluation validity (#5), the credibility and decision-making authority of those directly involved (#3), and how effectively they share those insights and learnings with colleagues

As I worked on this, one thing that comes through as critically important is where evaluation expertise (as opposed to content and context expertise) sits on a team, and how much of it (and what kind) is present at all. Some of this has to do with one’s philosophical underpinnings, specifically one’s beliefs about the nature of “truth”. Some questions to reflect on:

  • Is there such a thing as “getting the evaluation right” or “getting the evaluation wrong”?
  • Is there an important distinction to be made between “a well-founded evaluative conclusion” and “a strong consensus among everyone in the room”?
  • If “truth” (in this case, a sound evaluative conclusion, i.e., one worth acting on) is a socially constructed product of sensemaking, how do we know that a particular conclusion isn’t off-base as a function of who is and isn’t in the room?
  • Should the evaluator simply be a facilitator who helps people reach whatever conclusion makes sense to them, or should he or she coach and guide people in question formation, values definition, and the evaluative interpretation of evidence?
  • What kinds of evaluator expertise/competency/capability are needed in order to guide each of the six scenarios listed above?
  • What kinds of insider expertise are needed? What should you look for when seeking cultural insiders to take on the various roles listed above?

6 comments to A fleshed-out ‘program logic’ for why and where ‘insiders’ are included in evaluation

  • Many thanks to Judy Oakden for some offline discussion that helped identify a missing row in the table (now #4). I have edited the post above to add it.

  • Hi All

    I think we must rethink the insider/outsider debate from a systems thinking perspective, and in light of the internet’s evolving effects on our lives. Please follow these six pieces of live evidence for a better understanding of my views:

    http://en.oreilly.com/gov2fall09/public/sv/q/190

    http://cspcs.sanford.duke.edu/blog/kramer/toward_a_new_paradigm_progress_or_regress

    The Structure of Philanthropic Revolutions
    Elements of a New Paradigm: The Importance of Scale
    Elements of a New Paradigm: Evaluation
    Elements of a New Paradigm: Beyond Theories of Change

    Best

    Moein

  • Patricia Rogers

    A recent paper explores the issue of being somewhere between an insider and an outsider:
    Negotiating Insider and Outsider Identities in the Field: “Insider” in a Foreign Land; “Outsider” in One’s Own Land
    Ayça Ergun and Aykan Erdemir (Middle East Technical University, Ankara, Turkey)

    The authors present a self-reflexive and comparative account of their fieldwork experiences in Azerbaijan and Turkey to examine insider and outsider identities of researchers in settings that are neither unfamiliar nor fully familiar. It is argued that the researcher is often suspended in a betwixt-and-between position in the transformative process. This position is not necessarily a transitional one that leads to either the inclusion or exclusion of researchers by informants. Rather, the insider-outsider relationship can be conceived as a dialectical one that is continuously informed by the differentiating perceptions that researchers and informants have of themselves and others.
    Field Methods, Vol. 22, No. 1, 16-38 (2010)

  • Very interesting, and I think extremely relevant to Moein’s reminder (thank you, Moein!) about the importance of systems perspectives.

    To me the concept of boundaries seems very pertinent here – where we draw them, what dimensions/criteria we use to determine who’s an insider and who’s an outsider and in what respects.

    And of course, the whole concept of insider/outsider doesn’t fit that well when the recipient population is extremely diverse, e.g. a general population initiative. In such cases, all evaluators who are residents of the country/population in question are ‘insiders’ (generally speaking); the only true outsider would be a foreigner. But as the Turkish researchers remind us, even foreigners aren’t necessarily outsiders, or at least not in all respects. So, there’s a lot of boundary surfing and boundary questioning that’s relevant here.

    The other thing that keeps coming back to me when I look at the table is that it rather implies that ‘outsiders’ are on the evaluation team by default and get to choose whether to hire ‘insiders’. I don’t actually think that’s an accurate reflection of reality (although perhaps it is in some parts of the evaluation universe), and I definitely don’t think it’s the way the world should be; it’s just one frame to take in order to highlight the different motivations that go into outsiders’ thinking about who should be on ‘their’ evaluation teams.

    I do want to pick up the flipside that Paula White raised, i.e. considering insiders as the ‘default’ and asking when and why outsiders might be included in various roles. And taking this a bit further, it goes back to the question I raised earlier: why aren’t we seeing more ‘mainstream’ and general population programs and policies being evaluated by predominantly minority evaluation teams, to deliberately provide a completely new perspective?

    More on that in a later post …

    Jane

  • The boundary critique issue struck me too. In effect, the table is a “soft” boundary critique: “soft” in that it explores the consequences of a particular boundary decision about who is “in” and who is “out”, but doesn’t submit that decision to a full-blown assessment. Two of the key boundary dimensions identified by those in the critical systems field are “expertise” (and the potential false sense of security engendered by the expertise decisions taken) and whose voices are “heard”. I think looking at the table and the discussion points that follow with Ulrich’s Critical Systems Heuristics framework in mind could be a way of toughening up some of the statements.

  • Jane Davidson

    Thanks for this, Bob. I added the link to Ulrich’s mini-primer on CSH in case others want a look at it too. It looks very interesting. I’m going to ponder this one before my next post on the topic.