Credentialing – identifying the ‘core’ vs ‘specialized’ competencies

There’s a great discussion going on right now on the AEA Thought Leaders’ Forum. This week it’s being led by Jean King, who has raised the question of credentialing for evaluators.

Not all our subscribers are AEA members following this forum, so I’m cross-posting a revised and expanded version of a contribution here – and encourage you all to check out the wider discussion!

The problem of competency ‘laundry lists’

One problem with the various lists of evaluation competencies we see around is that they cover an enormous range of the skills evaluators have and use in their work – FAR MORE than any one evaluator (or even one evaluation team) could or even should have.

This leads people to think that:

  • “competent” = “can demonstrate every single one of the competencies”
  • “missing a few” = “incompetent”

… and of course, because no-one has the full repertoire, even top-notch evaluators will be looking at the list and saying “What?! You’re calling me incompetent because I can’t [insert skill]?”

It seems to me that we need to distinguish between:

  1. “the core” – the absolutely essential stuff that you really must have if you are to call yourself an evaluator
  2. “specialized competencies” – the specific methodologies, content areas, and other specialties that you choose to be particularly strong in

Defining ourselves professionally

I think we need to do this at two levels:

  1. defining ourselves as a profession (by defining “the core”)
  2. defining ourselves as individual evaluators, evaluation teams, or evaluation units or businesses (by defining our specialized competencies and approaches – which must include the core)

Defining “the core” of our profession

I think we all agree that there are people who peddle evaluation services who basically have no idea of the difference between evaluation and, say, measurement or descriptive research.

They are generally not aware that there are degrees or certificates in evaluation or professional associations for evaluators – and if they were aware, they probably wouldn’t opt in anyway because they don’t believe there’s anything unique about evaluation, nothing worth talking about, puzzling over, improving on.

So, what is that “core”?

In various discussions I’ve had with colleagues about this, somehow we keep coming back to one thing as being the fundamental difference, the core of what distinguishes evaluation (done right) from other work, and that is the values and ‘valuing’ piece:

  • We ask questions about how good/worthwhile/valuable/important things like design, implementation, and outcomes are;
  • We actually have a shot at answering those questions (not just free associating to them with whatever data seems vaguely relevant)

In the New Zealand context, we have strong agreement that cultural values are absolutely central to this – how we define what’s good/worthwhile/valuable/important (both the process of doing this and what ends up in the criteria, plus how we evidence it).

The recent NDE issue (#133), edited by George Julnes, is a fantastic resource for thinking really seriously about how we as evaluators judge value. It’s a must-read!

Defining “who we are” as evaluation practitioners

Every individual evaluator and every evaluation consultancy/business/contracting unit needs to be clear about “who they are” as evaluators – what is it that distinguishes their practice or approach from that of others working in this space?

It’s impossible for any individual or even any evaluation team or consultancy to be all things to all people – and it is dishonest to imply that we are.

So, who are you? What are you particularly good at? What defines your approach? And, importantly, what are you NOT strong in? What kind of work do you steer clear of?

It is up to each evaluator (and each evaluation unit/business/consultancy) to define the profile of competencies they want and need to develop in order to work effectively in the space they have carved out for themselves.

YES, that means it’s perfectly OK to position yourself as (for example) someone who does highly collaborative evaluation, works primarily with qualitative evidence, works in the United States, in English-speaking communities of color, on programs related to addiction and homelessness – so long as you are doing that core evaluative activity of asking and answering evaluative questions – like how good the program design is, how well it’s been targeted and implemented, how valuable the outcomes have been so far, and so forth.

If this were you, you’d likely turn down work that involved heavy number crunching or non-English speaking participants or a requirement for a very independent style of evaluation.

It doesn’t make you any less of an evaluator if you have specialized in a particular approach, context, or content area; it simply means you are focusing on getting really good in that space.

And nor is the generalist evaluator any less competent for choosing to practice across a range of domains, drawing on others’ expertise as required.

Credentialing – who is ‘in’? Who gets sidelined?

Credentialing (if we need it – and the answer to this varies depending on where you live and work – see Michael Scriven’s post in the Thought Leaders’ Forum discussion) has the potential to wrongly include or exclude people.

It also has the potential to appropriately include and exclude.

Here’s my take on inclusion/exclusion:

  • We will inappropriately exclude if we define the “must have” competencies more widely than what really genuinely is at the core of evaluation. [Or if we use a long list of competencies and assume they are all required to do any decent evaluation.]
  • We will inappropriately include if we say there is no core, or if we define it wrongly (e.g. as measurement or monitoring or applied research or providing information for decision making or …).

It’s always important to consider carefully who wins and who loses when any particular credentialing system is initiated – and whether one is needed at all.

We’ve had this discussion in New Zealand and decided no, we don’t need or want credentialing at this point. Instead, we are opting for:

  1. A list of competencies that practitioners can use to self-assess, reflect, and plan their professional development
  2. Professional development aligned with the needs most lacking and desired by professional association members
  3. Efforts to build the capability of clients so they become more effective evaluation scopers, purchasers, project managers, utilization advocates, and (in some cases) collaborators


6 comments to Credentialing – identifying the ‘core’ vs ‘specialized’ competencies

  • Michael Kiella

As a finishing doctoral student, I have a boatload of trouble with this discussion: it is difficult for me to accept that someone with a nice website is an “evaluator”.

Lawyers, physicians, optometrists, etc. all have to ascend beyond the position of “my opinion” to be certified as competent to practice. I was appalled to be asked for a consult at the AEA annual meeting in San Antonio, TX, by an “evaluator” with a high school education.

This is a matter of self-advocacy and academic integrity. It is time for this pandering to the under-prepared to end. Are academic pursuits nil? It is time to stop the political correctness of inclusion for the unqualified. Please let them continue their practice, whatever it might be…but let’s not call them Evaluators.

    Either we are a profession, or we are not. I prefer the former.

  • Jane Davidson

    Michael, thanks for your comment!

    Are you saying that credentialing should be based on academic qualifications rather than competencies? If so, which academic qualification(s) should qualify and which should not?

    If you believe credentialing should be based on competencies, then which do you believe should be non-negotiable, i.e. the core?

    Or, perhaps you are saying there should be some minimum academic qualification first and then some competency requirement? I’d be interested in your thoughts on this.

  • As someone who’s gone through the Canadian Evaluation Society’s Credentialing process, I have to say I found it a really valuable exercise.

First of all, going through the application and responding to each of the identified competencies helped me to define what kind of evaluator I am, what my values are, where my strengths are, and where I want to focus my future professional development.

    Second of all, for all the core competencies that are out there, it was this credentialing process that actually FORCED me to create some time to do the structured reflection you speak of. Left to my own busy devices I might never have made the time. Many Canadian evaluators keep telling me they plan to get to it, but just haven’t had a chance.

    Just my two cents on a Friday afternoon,


  • Cath Taylor

As someone who has decided that evaluation is the career path I want to follow, I have struggled with all aspects of competencies, accreditation, etc. How do you become an evaluator?
Here in NZ it appears you fall into the role and then, once there, decide to perhaps undertake education/professional development/learning on the topic. When I decided that I would like to look at becoming an evaluator, I discovered the ANZEA competencies list, and then looked for academic qualifications/courses that would provide me with the specific evaluation skills that I lacked. But there aren’t any for a novice/beginner. I am undertaking the Postgraduate Diploma in Social Sector Evaluation at Massey University, but this qualification requires me to draw on my practical experience in order to complete the course work. That’s a bit difficult when you don’t have any. And where are the apprenticeships or internships so that I can get some practical experience whilst undertaking this qualification?
    Without an accreditation or certification system how do I know I am on the right path in training myself to be an evaluator? Is evaluation really a “profession” that one can aspire to if academic courses or qualifications and internships etc are few and far between, or non-existent?

  • Patricia Rogers

    Hi Cath,

Evaluation is a tricky area – it is not a profession in the sense of having gatekeepers, certification of practitioners, or accreditation of courses. But there are some good courses around. Having a list of competencies, such as the ANZEA list, the earlier AES list, or the CES list, does help to map one’s existing knowledge and identify areas to develop further. It would be good to encourage more use of these. The Evaluators’ Institute (in the USA) has a map of how its short courses (1, 2, 3, or 5 days) cover a range of evaluation knowledge and skills, but few institutions do this sort of mapping. That doesn’t stop you from doing the mapping yourself and providing the information to potential employers or clients.

    In terms of developing your skills, many people, myself included, have found it useful to volunteer to do evaluation work for community groups as a way of getting practical experience. Chelsea Heaven in today’s AEA365 post has discussed the value of volunteering, including getting involved with Statisticians without Borders.

Finally, do come to evaluation conferences – they are a terrific part of becoming part of a collegial and professional community. The 2012 AES conference is in the last week of August in Adelaide – check out the details on the AES website.

  • Cath Taylor

Thank you for your encouragement, Patricia. Your comments reassure me that I am on the right path. I am using the AES and ANZEA competency lists to guide and reflect on my development. I was hoping to attend the ANZEA or AES conference this year, but as a mother of 3 little ones whose husband is about to be deployed to Southern Sudan for 6.5 months, this was/is not possible this year. However, I hope to attend some branch events. My husband’s work (NZ Defence Force) may assist rather than hinder me next year, though, as they have just established an evaluation unit that seemed open to the idea of my doing some work with them. I will check out the Statisticians without Borders site you mention – I worked for Statistics New Zealand for a few years.

    I am open to any development opportunities that come my way and will in the meantime keep reading, listening and learning from those within the ‘profession’.