Who’s responsible for non-genuine evaluation?

When evaluations turn out to be – as we so quaintly say here in Aotearoa New Zealand – “about as much use as an ashtray on a motorbike”, there’s often a pervasive assumption that it is ALL the evaluator’s fault.

In my view, at least as much responsibility rests on the client side. I have seen entire “evaluation” reports with absolutely no evaluation questions, and sometimes with not even a research question in sight! And there, looking at the report, is a client wondering why the evaluation really didn’t produce any answers. Well, no questions => no answers!

Where on earth was the client when the discussions were being had about what they needed to know from the evaluation, i.e. formulating the overarching evaluation questions? OK, the evaluator should get in there and facilitate the development of these questions, but if they don’t, then surely the person paying for the evaluation is responsible for ensuring the organization’s needs are going to be met and that the money spent on evaluation isn’t just a waste?

I also think that, sadly, many clients have never actually SEEN a genuine evaluation, so they wouldn’t know how to ask for one, and they think the steady stream of bean-counting-and-stories reports is about what you can expect from evaluation.

I do try to devote time to educating clients – not just my clients, but more widely too – about what evaluation CAN do for them, what the possibilities are, what they can ask for and actually get if they play their cards (and spend their evaluation budgets) right.

Genuine evaluation is as much dependent on clients with a clue as it is on evaluators with a clue.

2 comments to Who’s responsible for non-genuine evaluation?

  • Nan Wehipeihana

    Hi Jane

    Genuine evaluation is as much dependent on clients with a clue as it is on evaluators with a clue.

    As a general principle I don’t disagree, but I also think that the responsibility for genuine evaluation falls differentially depending on the extent to which clients in particular ‘have a clue’ about evaluation.

Scenario 1: Recently I was asked to review an evaluation report that the client was disappointed with because it did not adequately address the key evaluation questions. This disappointment was further compounded by an initial draft report that was overly lengthy, lacking in clarity, and containing some conclusions that were not strongly linked to the evidence/findings.

The client was seeking a process to maximise the utility of the evaluation findings and to have the report take better account of the organisation’s information needs.

    The two drafts of the report had been subject to extensive internal feedback by the client and to an independent peer review, commissioned by the client. By the time I was also asked to review the evaluation report, a period of nine months had elapsed, and it had not been signed off/accepted by the client.

Some project background – The evaluation contract was awarded through a competitive tender process. The project documentation reviewed suggested the client enjoyed a good working relationship with the evaluators, worked closely with them, and agreed to/signed off on the methodology, key evaluation questions and changes to the evaluation design.

On the one hand, it was during these key project stages that the client should have picked up any mismatch between the design and the evaluation questions, so there is a level of culpability attributable to the client in terms of the outcome of the evaluation and the final evaluation report. This speaks to the level of evaluation knowledge within the client organisation needed to make ‘good’ decisions.

On the other hand, the evaluators were contracted to design and undertake an evaluation and presented themselves as having the necessary evaluation skills and expertise to do this. In part, therefore, this speaks to the not unreasonable expectation on the part of the client that the evaluation would meet their primary information needs, and that the evaluators would be explicit, at key stages in the evaluation, about how well or poorly the proposed approach and methods would meet those needs.

So the issue here was not one of willingness or responsiveness on the part of the evaluators, nor one of a lack of client involvement or of poor communication between the parties. Rather, it was the disappointment of expecting that the evaluation would address the key evaluation questions, and finding out through the reporting process the extent to which this had not been achieved.

So in cases where a client lacks in-depth evaluation knowledge and hires an evaluator to both conduct the evaluation and provide evaluation advice, responsibility for genuine evaluation seems to me to be strongly ‘tilted’ towards the evaluator.

Scenario 2: Recently I reviewed an evaluation request for proposal/tender. Despite being called an evaluation, the tender essentially framed the project as a piece of research. For example:

(1) the ‘evaluation’ questions were research questions, i.e. they were descriptive as opposed to being explicitly evaluative;

(2) the tender did not document the requirement for evaluation-specific approaches or methodology; and

(3) the request for proposal did not seek information about how a framework for making judgments and drawing evaluative conclusions would be developed.

    So what we have in a situation like this is ‘evaluation dressed in drag’. That is, outwardly the project is paraded as an evaluation, but underneath the layers it is research.

    Possible flow-on scenarios include:

(1) It looks like research, it feels like research, and despite being called an evaluation, you get researchers doing research and thinking that what they are doing is evaluation. In this scenario evaluation gets a bad rap when the final report is delivered and doesn’t answer the questions the client needed answered but never actually asked.

(2) Also in this scenario, both the researcher and the client amble on in blissful ignorance, perhaps with the odd squawk from someone in the background who knows a little (maybe even a lot) about evaluation.

(3) Even if an evaluator gets this work, they are working with a client who thinks that what they have asked for is evaluation – that what they have asked for is what they need. So there’s a huge learning curve for the client, and disappointment looms large if we can’t help the client to understand that what they asked for is not what they need.

    Genuine evaluation in this example is very much dependent on clients with a clue (as it is on evaluators with a clue). Clueless clients are particularly problematic when genuine evaluation is the goal.

  • Liz Riley

Interesting – the key for me is the purpose of the work. A genuine evaluation should have a purpose beyond a measured description of what happened. It should also consider why it happened and what changed in the course of the project, and make recommendations about what needs to happen next.

    Organisations with a real commitment to making a difference to their client group will want to know how their organisation can work better, as well as how a project could have worked better.

In both the UK and the developing south I’ve seen evaluation done because we have to (for the funder), not because there is any desire for change. Funding agencies (including the UN agencies and the EU) could do a lot to explain the real value of evaluation to the organisations they support.