Rethinking evaluation’s intellectual silos

Michael Scriven has had us working our gray matter harder than usual this week, trying to come up with a new ‘Copernican’ revolution for evaluation. The ensuing discussion has covered, among other things, the point that certain [‘Northern’, and especially ‘North American’] views of the world (and their accompanying assumptions and methodologies) have historically been treated as ‘the default’. Michael responded:

Right, the view that people down under have an inverted view of what’s ‘really’ true is an excellent example of a loaded perception. That opens the mental door for replacing the Northern epistemology/ethics/concept of family etc. with a less biased one. BUT, apart from redoing maps so that NZ is on top, what exactly IS the new concept going to be, i.e., the new world view in the specific fields of epistemology, ethics, methodology, and what useful new results does it produce? Copernican revolutions are not JUST assertions that an alternative view is right; they are fully argued proofs that it’s better, e.g., because it explains or predicts some phenomena that the original view did not. OR, not quite as good but very important, the revolutionary view provides an equally good general account.

In other aspects of the discussion, we have confirmed that a Copernican revolution doesn’t have to be one that commands broad consensus, just one that’s based on sound logic and a different way of thinking about evaluation.

Well, let me try for a very southern-hemisphere-flavored candidate for rethinking evaluation globally – the realization that the various evaluation theories, approaches, models, and methodologies are not in fact ideologies to which one swears lifelong allegiance. Rather, some of the best genuine evaluations are the ones that ‘sample across the silos’ and combine approaches heretofore thought to be incompatible.

Let me explain where I’m coming from here. I did my early evaluation training in the United States (as an international student) and attended the AEA conference every year I was there. As with any situation where one is the cultural outsider, there are things that seem puzzling, things that the profession seems to take for granted, things that take a while to decipher.

The big one that struck me was how evaluation theories, models, approaches, and methodologies were viewed not so much as tools or perspectives to draw on as and where appropriate, but as aspects of one’s identity as an evaluator. It’s considered quite normal in the States to identify as a ‘theory-driven evaluator’ or an ‘empowerment evaluator’ or a ‘qualitative evaluator’ – and to use that chosen approach in every evaluation one does. As an outsider, this struck me as bizarre.

Now, as Bob Williams has very correctly pointed out to me, one of the major reasons for this is that the U.S. evaluation market is huge. It’s big enough to allow evaluators to position themselves as specialists, take calls only from clients who require that type of evaluation, and basically make a career out of that niche. This is one advantage of a large economy: people can afford to bury themselves deeply in one area and devote enough time to it to develop it fully.

Where I (and other kiwis, and many other evaluators from around the world) live and work, the evaluation market – and the economy in general – is much smaller. Although every evaluator or evaluation consulting firm promotes a range of services they are particularly skilled at, we generally have to have many more options up our sleeves. If we were highly specialized, we’d be seen as useless. Put another way, New Zealand is a ‘generalist’ culture (we need and value breadth), whereas the U.S. is a ‘specialist’ culture (where depth in very specific knowledge is much more respected).

The small size of our economy means that evaluators here can’t afford to position themselves as ‘wedded’ to one particular theory, approach, or methodology. The pieces of work are often quite small too, so it’s frequently impossible (within budget constraints) to put together a decent-sized team covering a wide range of expertise. As a result, individual evaluators need a diverse toolkit to survive in this context. We need to be able to blend approaches to fit the situation and change tack midstream if necessary.

So, blends of evaluation approaches that are often viewed as heretical in the States (and definitely raise a few eyebrows at the AEA conference – “What are those crazy kiwis and Aussies up to now?”) are considered normal and expected here. Some of the work I’ve done includes a goal-free, theory-based evaluation and a participatory/collaborative, explicitly evaluative, utilization-focused evaluation. There are many more, but I haven’t stopped to invent names for them.

I’d like to think the power of blending approaches will filter north across the equator and that we’ll start seeing more and more of it in the future. Perhaps Marv Alkin’s next edition of Evaluation Roots won’t have an evaluation theory tree splitting into ever smaller and more specialized twigs (populated mostly by U.S.-based specialists), but a wild, hybrid, sprawling bush with grafts from one branch sprouting out of another and vines intertwined with mingling sap, all laden with interesting and delicious fruits of new ideas.

10 comments to Rethinking evaluation’s intellectual silos

  • Charles Lusthaus

    Hi Jane,

    Your observation about the US “specialist-oriented evaluation personality” is right on! I moved from the US to Canada about 30 years ago and have observed the same phenomenon. Over the past 30 years, doing evaluations in Canada has allowed us to choose among evaluation approaches, work in diverse sectors and with diverse organizations, partner with diverse colleagues, and work in diverse cultures (native, urban immigrant, international). Such diversity, I think, provides the evaluator with a different way of knowing about and understanding the data we use to evaluate phenomena.

    This experience has led me to marvel at, and be very suspicious of, those who call for a “gold standard” in evaluation.

    Great insight – would love to see some follow-up on this idea!

    charles

  • gilles mireault

    Good morning from up north!

    I’ve been reading all the postings from last week’s discussion and find them truly inspiring. I did not contribute because, I must admit, I still struggle with this concept of value which, so I understand, is at the basis of all evaluation.

    I’m writing to the blog because I want to thank all the contributors for a very stimulating week of discussion.

    I really feel, in my work context (social sciences-led), that there is not enough of this kind of alternative thinking when choosing methodologies and doing evaluation.

    Hope to read more about other relevant topics in the future! Keep up the good work.

  • First, I want to say how satisfying it is to see this conversation happening, and in multiple places and spaces. The other thing I wanted to add to the mix is that within the US there is also a generational difference in how evaluation, regardless of what moniker you place before it, is perceived and practiced.

    As someone in their early 40s who came to evaluation, or rather evaluative work, as a means to strengthen the efforts of organizations engaged in social justice and equity work, for me it has always had a values-based bent, reflecting both my values and those of our clients. It has always been a tool in service of something greater, and thus the full range of theories and methods has always been part of the toolkit.

    There may be more similarities with other non-Western approaches to evaluation as you move down the age continuum. What do others think?

  • David Turner

    I’ve only just discovered this blog, but I look forward to following it. You’ve got several interesting threads started. The analogy of a Copernican revolution leads me to think about what was distinctive and successful about that intellectual change in the first place: the Copernican (heliocentric) view offered a simpler and more direct way of explaining the observed astronomical patterns than the prevailing Ptolemaic system. It’s not that the old system didn’t fit the data; it was just complicated and unwieldy. Can we find ways of explaining what we observe in policies or programs more simply and convincingly? That would be a Copernican-like change.

    If we want to rethink our approaches to evaluation, we need to look both at how we do evaluation (technical changes) and at how we show our clients and/or stakeholders that our work has value. Actually, that issue of demonstrating how evaluation itself adds value could be a topic for a new discussion thread! The issue also leads me to consider whether, beyond a rethink of evaluation, we need a rethink of the usual approaches to policy development as well (looking at what we evaluate as well as how we do it).

    I look forward to following this blog and perhaps contributing further.

  • Charles, Gilles, Jara and David — Many thanks for your comments!

    Jara, I do think there’s a generational difference there too, which gives me hope (I personally know a lot of fantastically talented ‘next generation’ evaluators, particularly among the 40-somethings). But I am also seeing a lot of forces working against recognition of the newer generation of silo-spanning evaluators (and their ideas). I have puzzled about why this is and would be interested in your (or anyone else’s) reflections. My response got a bit long, so I’ll post it in a new thread.

    David, I keep coming back to question a point that you have raised here, and that Michael raised in the snippet I quoted in the original post … Are we trying to “explain and/or predict” something about evaluation (or programs or policies, as you suggest)? I’m not so sure.

    Yes, there was something simpler and more elegant about putting evaluation at the center as the “sun” rather than treating program evaluation, policy analysis, personnel evaluation, product evaluation, etc., as separate and unrelated universes or solar systems. But the big thing for me about that move was not that it explained or predicted anything; it advanced methodology and practice by allowing us to see themes across those domains and take ideas from one and apply them to another. I see the call for silo-spanning across the various evaluation theories and approaches as having similar benefits, but not predictive or explanatory power …

    Thoughts?
    Jane

  • I have enjoyed reading your comments on evaluation silos. I am pleased to be one of the crazy kiwis referred to at AEA conferences, as I believe we keep our minds open to all approaches rather than choosing only one, and we reflect on whether the approach used is applicable to the diversity within the organisations we work with. I have noted that certain specialist-oriented evaluation personalities do get more “air time” than others, and I guess as evaluators we need to keep ourselves knowledgeable about all approaches in order to make informed decisions.

  • Jane et al.,

    Scriven was here at Western Michigan University just today, presenting some of the same thoughts – many of which I am familiar with, many not. Even so, I think that I am with Jane here: there are forces working against US (those of us working in the United States), where we now face extreme prescriptions regarding methods choice.

  • Hi Jane,

    I agree that there is not exactly a full embrace of the “silo-spanning” approach to evaluation. This tension seems to exist both in the field itself and among those who have had tremendous impact on how the “science” of evaluation has emerged: the public and philanthropic sectors. I will respond to your other post to continue this exchange and hope others will join.

    I see a similar discussion happening in the field of leadership. There are notions of the old, the new, and, most recently, the “now” generations of leadership.

    On the one hand, these distinctions between approaches seem helpful in terms of understanding how we might all work together towards common and shared outcomes. On the other hand, I wonder if they are not just creating new divides.

  • One further point I made to Jane (and to many others over the years) about the specialist/generalist distinction between large and small societies (I fled from Britain partly because I couldn’t stand the specialist culture): the irony is that generalism is itself a specialism. It requires skills that are quite, um, specialised. Without them, generalism is merely a more polite way of saying that you know a little about a lot. And I think generalism is more than that. Maybe we should develop a course on generalism … the new specialism. Think about how that title would fit on an airport bookstall. Remember you read it here first … or maybe not; you see, I don’t know everything about publishing.

  • I wonder, Jane, if your observation in the U.S. is more related to academia. In practice, when a client wants an evaluation, they are looking for a competent evaluation team that can help bring greater clarity to the questions, help them decide who to talk to and what type of sampling to use, gather and analyze quantitative and qualitative data, interview people and run focus groups, etc. Clients rarely specify that they want an… empowerment evaluator, or a… theory-driven evaluator. We all use all the tools and approaches we have, depending on the needs and resources of our evaluations. Professors, however, trying to make a name for themselves, need to pretend that there are “different” evaluation schools of thought. That is what I always assumed at AEA, and so I tolerated it as an academic need!