Specialist evaluators or content-free evaluators?


One of the continuing debates in evaluation is about the relative benefits of choosing an evaluator with significant content knowledge compared to an evaluator with more evaluation experience but little or no content knowledge. As is often the case, the answer depends on the details of the context.

I started thinking about this issue again when an RFP (Request For Proposal) dropped into my email Inbox, sent there because I have a keyword filter for ‘evaluation’:

Support to Weapons Effects Group in Helicopter Vulnerability and Lethality Analysis

Small team, short timeframe

When an evaluation is small and short, there isn’t space on the team for separate content experts and evaluation experts, and there isn’t time for a generalist evaluator to come up to speed, so you really need to combine the two. A content-expert evaluator comes with relevant conceptual frameworks (such as theories of change, or classifications of outcomes or strategies), potentially suitable measures and indicators, and possibly relevant benchmarks of performance.

I’ve recently been involved in commissioning an evaluation in this context, and one of the selection criteria was content knowledge in the area.

Large team, long timeframe

If there is a larger team, and more time, a generalist evaluator can add value by helping the content experts to be more explicitly evaluative, by challenging some of their assumptions that are not supported by evidence, and by bringing in evaluation methods and conceptual frameworks from other areas that could be relevant.

I’m just starting work on a monitoring and evaluation project in the area of family law, which is not an area I know about. But I am working as part of a team in an organization where I have worked as the non-content expert before, and we have time for me to have some ‘Family Law 101’ briefings before we begin work together.

But where is the cut-off point between these scenarios? Or are there exceptions where an alternative would be better?

6 comments on “Specialist evaluators or content-free evaluators?”

  • Horses for courses. There may be specific situations where only a subject expert can make valid judgements – e.g. evaluating clinical decisions. But as a starting point my bias is towards finding a context-appropriate mix of evaluation-specific and content-specific knowledge. The range of solutions is limited only by our creativity but might include, depending on the circumstances: a content-free evaluator guided by an expert advisor or advisory group; a subject expert guided through an evaluative process (developing and using clear, specific, fit-for-purpose criteria) by an evaluation expert; or, wherever time and resources permit, an integrated team drawing together a range of disciplines, content knowledge and evaluation-specific knowledge, working together at all stages of the process. Love your blog.

  • I don’t think it’s ever an either-or; it is always a both-and. To really provide thoughtful feedback (not necessarily recommendations, which is another argument) that will add value to the client, you absolutely have to know evaluation and also have content expertise somewhere on the team. If it is a solo job, find an expert you can talk to and who would be willing to look over your analysis/report. That is, unless the client merely wants a technician to collect data and present findings absent any real analysis.

  • Kelci Price

    My bias is always towards choosing an evaluator with strong evaluative skills, since I think there are other ways to bring in the content expertise when that’s a critical element. I have certainly had evaluations where I did not have the content knowledge I needed to make judgments about the program, so I had to find an expert to assist. However, I’ve also seen too many evaluations which were carried out by people steeped in content, but with little evaluation skill. I always feel that these situations are more damaging than the reverse, since without appropriate evaluation training the questions are often not very helpful, analyses are inappropriate, and the conclusions are riddled with methodological problems.

  • Patricia Rogers

    It would be interesting to review ToRs (terms of reference) and RFPs (Requests for Proposal) to see the evaluator competencies, knowledge, expertise and experience they call for. How many and which ones ask for: content knowledge and qualifications; direct experience of the programs; formal evaluation training and qualifications; previous experience evaluating these types of programs? And this also leads to the question of certification and accreditation. Hmm – things to ponder…

  • Jane Davidson

    In discussions with clients, the more reflective ones are starting to realize that the times when they have most wasted the evaluation dollar are (1) when they value content expertise over evaluation expertise and (2) when they turn researchers loose to do an evaluator’s job.

    There are some times when content expertise is crucial, but actually, in recent work I find I am being asked to work on something not only because I have the evaluation expertise to bring something useful, but also because I don’t have the content expertise and therefore won’t get lost in the details or favor a particular theory or perspective …

    The RFP that demonstrates a client is clued into these points is a more attractive one to respond to – much better to work with clients who already “get it”.

  • Simon Payne

    I think the notion of content-knowledgeable evaluators is very closely related to the issue of context-knowledgeable evaluators. As someone who works in many countries, I learnt early on not to assume that I always understood what was going on. In most cases, the more content and contextual understanding you can bring to a job the better but I think we always need to be cautious about the limits of our knowledge. The primary thing we bring to a job should always be our evaluation skills.