Recently I was asked to review an evaluation report in which the client had expressed disappointment because it did not adequately address the key evaluation questions. (See http://genuineevaluation.com/whos-responsible-for-un-genuine-evaluation/comment-page-1/#comment-23 ).
Whilst it was possible to make some recommendations to improve the overall presentation of findings, any suggestions were limited because the evaluation approach, methodology and data collection had been completed – and these core components could not be revisited.
A simple logic might be: disappointing evaluation report → implement report review suggestions → improved evaluation report → improved evaluation. Of course, an evident flaw in this logic is that an improved evaluation report does not necessarily equate to an improved evaluation, and ‘tinkering’ with the output of the evaluation effort – the evaluation report – will in many cases be too little, too late.
This raised for me the bigger question of what it takes to get a ‘good’ evaluation. Focusing on the discrete components of an evaluation is of course valuable, but in reality each of the core evaluation components, tasks and activities needs to be done well.
So what does it take to get a ‘good’ evaluation?
This is not an easy question to answer and there are many textbooks and evaluation checklists dedicated to achieving this very goal. They are often written for evaluation practitioners, and therein lies part of the problem.
Whilst not wanting to trivialize the diversity and complexity of the practice of evaluation, I got to thinking: What four or five key messages or advice would we offer to evaluation managers and commissioners – who are not evaluation specialists – about what it takes to get a good evaluation? (Five is an arbitrary number intended to provide focus; there may well be more than five.)
Okay, so here’s a first stab at a core five evaluation components:
- Key evaluation questions that are explicitly evaluative.
- An appropriate match of methods and methodology to address the key evaluation questions.
- An evaluation framework that provides a clear and transparent method and process for drawing evidence based evaluative conclusions.
- The alignment of data collection to the key evaluation questions.
- Clear communication of evaluation findings.
All of the planning, design, implementation and other tasks and activities are important to achieving a ‘good’ evaluation. The attempt here is to identify some core components that we can suggest commissioners focus on to improve the likelihood of getting a good evaluation.
I’ve suggested five core components (a fuller discussion follows below), and the invitation now is to comment on:
- What are the five core messages for commissioners of evaluation to get to a good evaluation? Are there more or fewer than five core messages?
- What’s the value, if any, of this type of approach to getting a ‘good’ evaluation?
1. Key evaluation questions that are explicitly evaluative.
That is, questions which specifically ask about the quality, value or importance of the evaluand or some aspect of it (Davidson, 2005). I briefly touch on the limitations of research questions as a poor substitute for explicitly evaluative questions in the earlier post comment. Some commissioners of evaluation will need help in developing or refining their key evaluation questions (see Davidson 2009 AEA presentation).
2. Appropriate match of methods and methodology to address the key evaluation questions.
The aim here is to ensure that the proposed methods and methodology address and connect to the key evaluation questions. For example, commissioners might require: (1) evaluation tenders or bids to discuss the strengths and limitations of their proposed methods/approach in addressing the key evaluation questions; (2) an evaluation plan, as the first deliverable of any contract, that clearly demonstrates how the methods and methodology address the key evaluation questions.
3. An evaluation framework (including an analysis and synthesis methodology) which provides a clear and transparent method and process for drawing evidence-based evaluative conclusions.
I know from my own practice that, whilst making data-driven judgments against evaluative criteria hasn’t posed a problem, being clear and transparent about the basis for these determinations – including the values, assumptions and preferences prioritized in this process – has been an emergent practice. So, we might suggest to commissioners that they seek specific feedback in tender documents and evaluation plans about how evaluators will develop a framework for making judgments and drawing evaluative conclusions. I find Jane’s work (Davidson, 2005) particularly useful in this respect.
4. The alignment of data collection to the key evaluation questions.
It is at the data collection stage that the gap between what was envisaged in the evaluation plan and what is actually feasible and affordable ‘in the field’ surfaces. It can be a particular point of vulnerability, as time and budget pressures typically loom large. In particular, commissioners need to (1) keep a ‘close eye’ on the data collection tools for their alignment to the key evaluation questions; and (2) ensure that if methods are to be scaled back, changed or discarded, those changes are made knowing full well the implications for the overall evaluation and the ability to draw evaluative conclusions. In my experience, it is at this stage that commissioners are particularly vulnerable to ‘dropping the ball’ – not understanding the importance of these activities, nor the potential impact on the evaluation.
5. Clear communication of evaluation findings.
Written evaluation reports continue to be the most common method of reporting evaluation findings (despite the increased range of communication options available). The normal conventions of report writing apply to evaluation reports; for example, the report is written in a clear and easily readable style, with a logical sequence and presentation of findings. Other reporting elements specifically applicable to an evaluation report might include: (1) clearly linking the data and findings to the conclusions drawn; (2) taking account of, and critically exploring, likely alternative explanations; and (3) drawing explicitly evaluative conclusions and making transparent the basis on which judgments have been made.
The evaluation report is the sum of the evaluation effort rendered onto the page, and it goes without saying that it needs to be done well.