There’s a great discussion going on right now on the AEA Thought Leaders’ Forum. This week it’s being led by Jean King, who has raised the question of credentialing for evaluators.
Not all our subscribers are AEA members following this forum, so I'm cross-posting a revised and expanded version of my contribution – and I encourage you all to check out the wider discussion!
The problem of competency ‘laundry lists’
One problem with the various lists of evaluation competencies we see around is that they cover an enormous range of the skills that evaluators have and use in our work, but FAR MORE than any one evaluator (or even one evaluation team) could or even should have.
This leads people to think that:
- “competent” = “can demonstrate every single one of the competencies”
- “missing a few” = “incompetent”
… and of course, because no-one has the full repertoire, even top-notch evaluators will be looking at the list and saying “What?! You’re calling me incompetent because I can’t [insert skill]?”
It seems to me that we need to distinguish between:
- “the core” – the absolutely essential stuff that you really must have if you are to call yourself an evaluator
- “specialized competencies” – the specific methodologies, content areas, and other specialties that you choose to be particularly strong in
Defining ourselves professionally
I think we need to do this at two levels:
- defining ourselves as a profession (by defining “the core”)
- defining ourselves as individual evaluators, evaluation teams, or evaluation units or businesses (by defining our specialized competencies and approaches – which must include the core)
Defining “the core” of our profession
I think we all agree that there are people who peddle evaluation services who basically have no idea of the difference between evaluation and, say, measurement, or descriptive research.
They are generally not aware that there are degrees or certificates in evaluation or professional associations for evaluators – and if they were aware, they probably wouldn’t opt in anyway because they don’t believe there’s anything unique about evaluation, nothing worth talking about, puzzling over, improving on.
So, what is that “core”?
In various discussions I’ve had with colleagues about this, somehow we keep coming back to one thing as being the fundamental difference, the core of what distinguishes evaluation (done right) from other work, and that is the values and ‘valuing’ piece:
- We ask questions about how good/worthwhile/valuable/important things like design, implementation, and outcomes are;
- We actually have a shot at answering those questions (not just free-associating to them with whatever data seems vaguely relevant).
In the New Zealand context, we have strong agreement that cultural values are absolutely central to this – how we define what’s good/worthwhile/valuable/important (both the process of doing this and what ends up in the criteria, plus how we evidence it).
The recent NDE (#133), edited by George Julnes, is a fantastic resource for thinking really seriously about how we as evaluators judge value in evaluation. It’s a must-read!
Defining “who we are” as evaluation practitioners
Every individual evaluator and every evaluation consultancy/business/contracting unit needs to be clear about “who they are” as evaluators – what is it that distinguishes their practice or approach from that of others working in this space?
It’s impossible for any individual or even any evaluation team or consultancy to be all things to all people – and it is dishonest to imply that we are.
So, who are you? What are you particularly good at? What defines your approach? And, importantly, what are you NOT strong in? What kind of work do you steer clear of?
It is up to each evaluator (and each evaluation unit/business/consultancy) to define the profile of competencies they want and need to develop in order to work effectively in the space they have carved out for themselves.
YES, that means it’s perfectly OK to position yourself as (for example) someone who does highly collaborative evaluation, works primarily with qualitative evidence, works in the United States, in English-speaking communities of color, on programs related to addiction and homelessness – so long as you are doing that core evaluative activity of asking and answering evaluative questions – like how good the program design is, how well it’s been targeted and implemented, how valuable the outcomes have been so far, and so forth.
If this were you, you’d likely turn down work that involved heavy number crunching or non-English speaking participants or a requirement for a very independent style of evaluation.
It doesn’t make you any less of an evaluator if you have specialized in a particular approach, context, or content area; it simply means you are focusing on getting really good in that space.
Nor is the generalist evaluator any less competent for choosing to practice across a range of domains, drawing on others’ expertise as required.
Credentialing – who is ‘in’? Who gets sidelined?
Credentialing (if we need it – and the answer to this varies depending on where you live and work – see Michael Scriven’s post on the Thought Leader Forum discussion) has the potential to wrongly include or exclude people.
It also has the potential to appropriately include and exclude.
Here’s my take on inclusion/exclusion:
- We will inappropriately exclude if we define the “must have” competencies more widely than what really genuinely is at the core of evaluation. [Or if we use a long list of competencies and assume they are all required to do any decent evaluation.]
- We will inappropriately include if we say there is no core, or if we define it wrongly (e.g. as measurement or monitoring or applied research or providing information for decision making or …).
It’s always important to consider carefully who wins and who loses when any particular credentialing system is initiated – and whether one is needed at all.
We’ve had this discussion in New Zealand and decided no, we don’t need or want credentialing at this point. Instead, we are opting for:
- A list of competencies that practitioners can use to self-assess, reflect, and plan their professional development
- Professional development aligned with the competencies that professional association members most lack and most want to build
- Efforts to build the capability of clients so they become more effective evaluation scopers, purchasers, project managers, utilization advocates, and (in some cases) collaborators
Related posts and references
- AEA Thought Leaders’ Forum (April 2012 – Jean King leading discussion on credentialing)
- Promoting Valuation in the Public Interest: Informing Policies for Judging Value in Evaluation (NDE #133)
- Lifting the quality of evaluation #2: Capable evaluators who know their ‘space’