Commissioning XGEMs – the sequel

In Monday’s post, Extreme Genuine Evaluation Makeovers (XGEMs) for Commissioning, I suggested kicking off the process of selecting an evaluator for a project with a well-designed EOI – often more informative, less onerous (on both sides), and a better way to ‘shortlist’ providers than some of the woefully ineffective RFP processes around.

Today the topic is the next step: how to evaluate your shortlist of evaluators and figure out who’s got what it takes.

Step 2: Identify what will make or break the evaluation – and evaluate on that (instead of on the mind-numbing detail)

Many RFPs ask for incredible amounts of detail on how an evaluation will be done, and get very similar descriptions from proposers: what kinds of data they would use, which data collection methods, what software they would use to conduct the analyses … a lot of mundane detail that won’t help you distinguish among proposals.

For most (if not all) evaluations, it is actually (IMHO) unsound practice to do such detailed planning up front without first engaging with the client to understand their real needs, the political and other aspects of context, the key evaluation questions that need to be answered, and so forth. Perhaps my experiences have been particularly woeful, but I have yet to see an RFP that gives enough of that detail to allow me to plan a seriously useful, genuine, and actionable evaluation.

How about a completely different approach instead?

Instead of selecting based on the evaluation plan, select based on the evaluation team’s capability to handle the key challenges that are likely to make or break the evaluation. For example:

  • A frequent problem that makes evaluations unactionable (not to mention non-genuine!) is that they fail to actually answer evaluative questions. Pick a question that you really need answered evaluatively – like how substantial and valuable a certain set of outcomes is – and ask the proposers how they would answer it, including an example of such a question answered in another evaluation report. [If the answer is no more than “we’d measure the following variables and test for statistical significance” or “we’d interview the following stakeholders”, that’s another one for the bin!]
  • Another frequently botched aspect of evaluations is that qualitative and quantitative data are presented separately and not synthesized or woven to arrive at defensible answers to evaluative questions. Ask how the evaluators generally go about doing this, and again, ask for an example.
  • Most evaluations look at outcomes, and the vast majority of these require at least some causal inference to show that the outcome actually “came out of” the evaluand. Once again, ask what mix of methods the evaluators would use under the constraints of the evaluation in question (and of course, ask for an example).
  • If you need answers to value-for-money questions but have several important but intangible outcomes to include, ask how the evaluators would go about doing this – and again, ask for an example where they’ve done it.
  • If there are any potentially contentious or important-to-grasp concepts or judgment calls to be made, ask the evaluators to explain their understanding of the concept and/or how they would go about defining it for the purpose of the evaluation and making a defensible judgment.
  • If there are any particularly challenging differences in values or perspectives among key stakeholders, organizational politics, or difficult contextual issues (e.g., a restructuring in progress), ask the evaluators to describe their approach to handling such situations – backed, if possible, with examples of how they have successfully navigated such challenges in the past.

Responses to these questions could be given in writing, orally (in a 2-hour presentation/interview), or both. Experience suggests that the oral presentation/interview option may have the edge, because it allows the panel to gauge real-time critical and evaluative thinking and communication skills, as well as likely credibility with key evaluation audiences.

In each case, the selection panel would be looking for (a) a well-reasoned, sound response that would clearly provide a direct answer to the question and/or (b) an effective way of navigating the usual (and some of the unusual) challenges presented by evaluation – plus some evidence that the provider has done this in the past on a similarly challenging project.

Be sure to obtain copies of the full evaluation reports that illustrate the above capabilities, and (as Susan Kistler says in her comment to the previous post) check that the conclusions in the executive summary are actually justified using well-woven evidence, evaluative methodology, and evaluative reasoning.

1 comment to Commissioning XGEMs – the sequel

  • Absolutely! I would love to be involved in a tendering process like this. Recently I’ve submitted two tenders, notable only because of their similarly onerous up-front requirements. My approach was to submit as streamlined a response as possible, emphasising that my approach is flexible and that the final methods couldn’t be negotiated until I’d met the client and discussed the emerging issues for the evaluation. In both instances the client struggled with the concept of a ‘flexible’ proposal. In some ways, I can understand a commissioner’s hesitation to award a contract to an evaluator who talks in broad terms about methodological approaches. But I think there is definite value in the interview style approach you’ve mentioned. Your approach is a higher-level filtering, which could be followed by asking a short-list to prepare a more detailed proposal with more detailed information. I’d put in a proposal for a tender like that!