Don’t drop the ball: Five key messages for getting to a ‘good’ evaluation

Recently I was asked to review an evaluation report that the client was disappointed in because the report did not adequately address the key evaluation questions.

Whilst it was possible to make some recommendations to improve the overall presentation of findings, any suggestions were limited because the evaluation approach, methodology and data collection had been completed – and these core components could not be revisited.

A simple logic might be: disappointing evaluation report → implement report review suggestions → improved evaluation report → improved evaluation.  Of course, an evident flaw in this logic is that an improved evaluation report does not necessarily equate to an improved evaluation, and ‘tinkering’ with the output of the evaluation effort – the evaluation report – will in many cases be too little, too late.

This raised for me the bigger question of ‘what does it take to get a ‘good’ evaluation?’ Focusing on the discrete components of an evaluation is of course valuable, but in reality each of the core evaluation components, tasks or activities needs to be done well.

So what does it take to get a ‘good’ evaluation?

This is not an easy question to answer, and there are many textbooks and evaluation checklists dedicated to achieving this very goal.  They are often written for evaluation practitioners, and therein lies part of the problem.

Whilst not wanting to trivialize the diversity and complexity of the practice of evaluation, I got to thinking: What four or five key messages or pieces of advice would we offer to evaluation managers and commissioners – who are not evaluation specialists – about what it takes to get a good evaluation?  (Five is an arbitrary number intended to provide focus; there may well be more than five.)

Okay, so here’s a first stab at a core five evaluation components:

  1. Key evaluation questions that are explicitly evaluative.
  2. An appropriate match of methods and methodology to address the key evaluation questions.
  3. An evaluation framework that provides a clear and transparent method and process for drawing evidence-based evaluative conclusions.
  4. The alignment of data collection to the key evaluation questions.
  5. Clear communication of evaluation findings.

All of the planning, design and implementation tasks and activities are important to achieving a ‘good’ evaluation.  The attempt here is to identify some core components which we can suggest commissioners focus on to improve the likelihood of getting a good evaluation.

I’ve suggested five core components (a fuller discussion follows below), and the invitation now is to comment on:

  1. What are the five core messages for commissioners of evaluation to get to a good evaluation? Are there more or fewer than five core messages?
  2. What’s the value, if any, of this type of approach to getting a ‘good’ evaluation?

1. Key evaluation questions that are explicitly evaluative.

That is, questions that specifically ask about the quality, value or importance of the evaluand or some aspect of it (Davidson, 2005).  I briefly touch on the limitations of research questions as a poor substitute for explicitly evaluative questions in the earlier post comment.  Some commissioners of evaluation will need some help in developing or refining their key evaluation questions (see Davidson’s 2009 AEA presentation).

2. Appropriate match of methods and methodology to address the key evaluation questions.

The aim here is to ensure that the proposed methods and methodology address/connect to the key evaluation questions. For example commissioners might require: (1) evaluation tenders or bids to discuss the strengths and limitations of their proposed methods/approach in addressing the key evaluation questions; (2) an evaluation plan as the first deliverable of any contract that clearly demonstrates how the methods and methodology address/connect to the key evaluation questions.

3. An evaluation framework (including an analysis and synthesis methodology) which provides a clear and transparent method and process for drawing evidence-based evaluative conclusions.

I know from my own practice that whilst making data-driven judgments against evaluative criteria hasn’t posed a problem, being clear and transparent about the basis for these determinations – including the values, assumptions and preferences prioritized in this process – has been an emergent practice.  So, we might suggest to commissioners that they seek specific feedback in tender documents and evaluation plans about how evaluators will develop a framework for making judgments and drawing evaluative conclusions. I find Jane’s work (Davidson, 2005) particularly useful in this respect.

4. The alignment of data collection to the key evaluation questions

It is at the data collection stage that the reality of what was envisaged in the evaluation plan and what is actually feasible and affordable ‘in the field’ surfaces.  It can be a particular point of vulnerability, as time and budget pressures typically loom large.  In particular, commissioners need to (1) keep a ‘close eye’ on the data collection tools for their alignment to the key evaluation questions; and (2) ensure that if methods are to be scaled back, changed or discarded, those changes are made knowing full well the implications for the overall evaluation and the ability to draw evaluative conclusions.  In my experience, it is at this stage that commissioners are particularly vulnerable to ‘dropping the ball’ – not understanding the importance of these activities, nor the potential impact on the evaluation.

5. Clear communication of evaluation findings

Written evaluation reports continue to be the most common method of reporting evaluation findings (despite the increased range of communication options available).  The normal conventions of report writing apply to evaluation reports; for example, the report is written in a clear and easily readable style and there is a logical sequence and presentation of findings. Other reporting elements specifically applicable to an evaluation report might include: (1) clearly linking the data and findings to the conclusions drawn; (2) taking account of and critically exploring likely alternatives; and (3) drawing explicitly evaluative conclusions and making transparent the basis on which judgments have been made.

The evaluation report is the sum of the evaluation effort rendered into a single document, and it goes without saying that it needs to be done well.

4 comments to Don’t drop the ball: Five key messages for getting to a ‘good’ evaluation

  • Robyn Bailey

    Kia ora Nan.

    Thank you for the above, especially fleshing out five components more likely to contribute to producing a ‘good evaluation’ and taking the conversation forward positively and productively. I’m engaging with your 2nd question about the ‘value’ of this type of approach. And yes I think there is value in articulating clearly and accessibly the core components of a ‘good evaluation’ for commissioners. It may take some while to get agreement within the evaluation community :)

    On reading this, what occurred to me was whether a commissioner would ‘understand’ or have the skill to know if they were getting the above during the RFP and/or evaluation planning process. This is not a question about the intelligence of commissioners; rather it is a comment that the concepts in both the headings and the descriptions are potentially challenging to understand (and do, even amongst evaluators, where they are done to a variable quality). I know both you and Jane have expressed that commissioners certainly know (and feel frustration) when they receive an evaluation that doesn’t deliver an ‘evaluation’.

    So, I’m wondering whether there is a further need to complement the above with commissioners being able to access evaluative expertise to advise them? Some commissioners have evaluation staff, sometimes this occurs because the contractor has this expertise and works through what’s needed to produce a ‘good evaluation’ with the commissioner … but what’s not happening or missing in the scenarios that you’re familiar with? Is this part of the puzzle, or are there other factors also at play here (beyond advice to commissioners and commissioners having expertise to advise them)?

    Looking forward to hearing more from you over your ‘guest’ week!

    Kind regards … Robyn Bailey

  • The commissioners are an important piece of the evaluation puzzle who are often ignored in evaluation capacity building. We had an interesting request from one of our regional offices that we had trouble understanding at first. But as we dug, we discovered that what they wanted in evaluation capacity building was not to learn tools and methods but rather, for those who commission and those who process evaluations (administratively as well), a better understanding of the evaluation process and how to know what to ask for, how to know who to commission, and how to know if they are getting what they ask for. So we are looking into developing some materials for that purpose – if anyone has any suggestions on where to look, these would be welcome!

  • Nan Wehipeihana

    Hi Robyn. I take your point that simply developing a list of key messages/advice for commissioners (with little or no evaluation expertise) to use does not mean that commissioners will: (1) know what a ‘quality’ response to the question/s is; or (2) be able to respond, in an informed and knowing way, to what the evaluator/s put forward in response to these questions. So the five key messages/questions, as an isolated tool for a non-evaluation audience, have some obvious limitations.

    My thinking around the five core messages was that, firstly, I had an expectation, perhaps naively so, that the evaluators, prompted and/or unprompted, would explain the relative strengths, weaknesses, limitations etc. of their responses to each of these questions, as they would for any other question a commissioner might ask. Indeed, I believe they have an ethical obligation to do so, given that they present themselves as having the necessary evaluation expertise to carry out an evaluation.

    Secondly, I felt it was useful for commissioners to know that there are 5 (or however many) critical evaluation tasks or components that they need to keep a sharp eye on i.e. to not drop the ball on.

    Commissioners might not be able to ‘interrogate’ in depth what has been proposed, but again it would not be unreasonable of them to expect the evaluator to provide that advice.

    I agree that it would be incredibly valuable if commissioners had access to evaluation advice and expertise to help them negotiate key stages of the evaluation process if not throughout the entire evaluation. However, in my experience this is not the case for many commissioners of evaluation.

    The extent to which this happens is highly variable and at times fluctuates or is available at different stages of the evaluation (or not) depending on organizational resource, evaluation budget, and the relative priority of the evaluation project to other evaluations (and/or to other projects/research).

    So such a ‘tool’ provides a bare bones approach for commissioners with limited or no evaluation experience, and probably is of more value when supported by evaluation expertise, if this is readily available and accessible.

  • Nan Wehipeihana

    Kia ora Fred and Ricardo.

    Fred, I think you’ve got to the heart of the matter. Commissioners, as you rightly point out, (for the most part) don’t want to be evaluators. What they want:

    is a better understanding of the overall evaluation process
    to know what to ask for
    to know if they are getting what they ask for; and
    to know who to commission (and how to select a good evaluator/evaluation team)

    Robyn’s response caused me to reflect on my experience in the use of tools and to recall that evaluation tools are a good start, but a ‘tools only’ approach is only part of the answer. Generally commissioners (as learners) don’t know how to assess whether the response to any particular question, task or activity is an appropriate or quality response, and how to take action based on this response from an informed, knowing and confident base. Ideally tools would be accompanied by some form of evaluation support and/or mentoring.

    I liken this to a trip I took to Argentina and Chile last year. I learnt some conversational phrases from a phrase book (the tool) and I could ask simple questions like, how was your day, where will we go next (implementation of first tasks from the tool). However, when the response came back, I sometimes could barely understand it (hand gestures and context aside) and responded in garbled and stilted phrases (the next step), hoping that I was getting it about right, but not really sure. For the duration of the trip, I was heavily reliant on my daughter (the mentor) to translate and provide the words and phrases to respond.

    As you point out Fred, it is the conversations and practice reflection on the application of tools, or the completion of various tasks, that is of most value and where learning becomes conscious – as the conversations and reflection ground and contextualize the learning in a practical and applied sense, relative to a particular project.

    I have mainly used face-to-face methods when implementing ECB. For example: (1) centralized workshops or training with ongoing evaluation support/mentoring provided by local evaluators; (2) working with a single organization, providing ongoing ECB through small focused learning sessions (on a particular area of evaluation), then providing ongoing support, e.g. feedback on how a particular task has been carried out.

    It would be good to find out how technology has been used to support ECB at a community level, e.g. online discussion forums, blogs, Skype, video conferencing, and how effective these have been.

    I don’t know of any specific tools, although I have asked a couple of colleagues who do a lot of this type of work to post any references here or to email them to me. However, one resource that you might find particularly useful for explaining the importance of key evaluation questions is a presentation by Jane Davidson – Improving evaluation questions and answers: getting actionable answers for real-world decision makers – which she presented at AEA in 2009.

    Users and Intended Use – a critical first question

    Hi Ricardo. I can’t believe I didn’t start with a key message or question around users and intended use (duh!). I think it needs to be the first question, and not a pre-question that sits out to the side, because:

    (1) it will help tease out clarity of purpose and intended use. This can be quite difficult for the commissioners (and evaluators alike) particularly when attempting to balance the needs of multiple end users;
    (2) the key questions and approach and methods etc need to be aligned to intent, purpose and utility;
    (3) if it sits out to the side it has the potential to drop off as a key message.

    So with this as the first key message/question, and wanting to stick to five, what do we drop or combine? One option is to look at combining messages, perhaps messages 2 and 4. Whilst messages 2 and 4 are particularly close, one has a focus on the planning and design end, and the other on checking alignment between the two at the implementation/data collection phase. Or maybe there should be six key messages?