When evaluations turn out to be – as we so quaintly say here in Aotearoa New Zealand – “about as much use as an ashtray on a motorbike”, there’s often a pervasive assumption that it is ALL the evaluator’s fault.
In my view, at least as much responsibility rests on the client side. I have seen entire “evaluation” reports with absolutely no evaluation questions – sometimes without even a research question in sight! And then there’s the client, looking at the report and wondering why the evaluation didn’t really produce any answers. Well: no questions, no answers!
Where on earth was the client when discussions were underway about what they needed to know from the evaluation, i.e. formulating the overarching evaluation questions? OK, the evaluator should get in there and facilitate the development of these questions, but if they don’t, then surely the person paying for the evaluation is responsible for ensuring the organization’s needs are going to be met and that the money spent on evaluation isn’t just a waste?
I also think that, sadly, many clients have never actually SEEN a genuine evaluation, so they wouldn’t know how to ask for one, and they assume the steady stream of bean-counting and storytelling reports is about all you can expect from evaluation.
I do try to devote time to educating clients – not just my own clients, but more widely too – about what evaluation CAN do for them, what the possibilities are, and what they can ask for and actually get if they play their cards (and spend their evaluation budgets) right.
Genuine evaluation is as much dependent on clients with a clue as it is on evaluators with a clue.