Who are the right-to-know audiences in evaluation?

It is tempting to assume that common sense will prevail and those who clearly should be allowed to see evaluation findings will have reasonable access to them.

Not always so – as we heard from one of the interesting keynotes at the recent conference of the Aotearoa New Zealand Evaluation Association.

Samantha Lundon is Chief Executive of the Ideal Success Trust, which works with the extended families of 10-year-old Māori [indigenous] children who have the potential to do well but who face significant obstacles.

Her engaging presentation gave us a very real insight into what it is like to be part of the organization or program being evaluated – in Samantha’s case, with not one but three different evaluations commissioned by three different funders!

One evaluation of Ideal Success, commissioned by a New Zealand central government agency, had been completed and a report filed three years ago. Samantha, as CE of the evaluated organization, has requested a copy of the report multiple times since then, and it still has not been released to her!

As an evaluator, I find this a truly staggering revelation of how little respect some funders have for program staff and evaluated organizations as right-to-know audiences for evaluation reports. Although Samantha could probably force the release of the report via an Official Information Act request, she shouldn’t have to.

Prominent evaluation theorist Dan Stufflebeam, founder and former director of The Evaluation Center at Western Michigan University, would no doubt be admonishing the evaluators for not having applied his Evaluation Contracts Checklist (available from the well-worth-bookmarking Evaluation Center Checklist Site), which reminds us to identify the right-to-know audiences for the evaluation and negotiate up front that they get access.

Dan’s advice is sound, but how easy is it to follow? And, what if the right-to-know audiences aren’t just the provider, but the program recipients or the community?

A few years ago, I completed an evaluation of a leadership development program, which required me to interview dozens and survey hundreds of senior managers. Many of these managers expressed an interest in seeing the report when it was released, and I had hoped to be able to get back to them with a link to it at the time.

In a move that I now understand is fairly typical (in this country, at least), the client organization released the report only to the program providers and did not release it publicly at all (it even decided not to officially finalize the report, keeping it as a “draft”, which presumably made it easier to bury). Although some current program participants were invited to a presentation summarizing the key findings, the report itself was not made available to them, let alone to the general public. As in the case of Samantha’s organization, there is always the option of filing an Official Information Act request just to see the report, but for a program recipient whose future may depend on the funding organization’s view of him or her, this could be seen as a potentially career-limiting move (e.g., being labeled a troublemaker).

I do keep Dan Stufflebeam’s advice in mind when going into these contracts now, but I am also acutely aware that the evaluator is sometimes (often?) a relatively powerless force compared to the funding agency, especially when the funder is very risk-averse with respect to media coverage, for example.

Quite apart from evaluation findings being of interest and importance to all those right-to-know audiences (not to mention the general public), this is one reason why it’s hard to share examples of our most methodologically interesting work with evaluator colleagues around the world.

I’m interested in how other evaluators around the world might have successfully navigated this issue in the face of similar political forces. If you have a story or some advice to share, please access the post page and contribute a comment!

Samantha graciously agreed to have an audio recording of her keynote address made available to anzea members, so I am posting it here for them and for others who may be interested (with apologies for the not-brilliant sound quality). [If you can’t see the audio play button, please go to the post page and access it there.]

Listen to Samantha Lundon’s anzea 2012 keynote (or right-click to download it for later)

[Listeners not from New Zealand should be aware that, as is traditional here, Samantha introduces herself in Māori at the beginning; her presentation itself is in English, so just hold on through the introduction!]

3 comments to Who are the right-to-know audiences in evaluation?

  • Vidhya Shanker

    I don’t have a solution, I’m afraid, but at the risk of appearing to self-promote: I am co-presenting two back-to-back interactive sessions that raise for discussion the power dynamics among funders, evaluators, and those organizing for social/racial/economic/gender justice, precisely because my colleagues and I would like to deconstruct and re-imagine the system as it currently exists. All are welcome on Thursday afternoon (10/25, “The Revolution Will Not Be Culturally Competent”) from 1–4pm in 205B – together we can revolutionize rather than simply reform!

  • Tinashe Mujuru

    It is always important to note that stakeholders are key players when it comes to results-based programming. Therefore I personally think that all beneficiaries, direct and indirect, should be given that priority. Although donors want to know the effectiveness of their funding, there is also a need to inform the people affected by the problem being addressed, so that they feel involved; this would also ensure cooperation and buy-in for future initiatives.

  • Hi Jane, I have had at least two major evaluation reports buried for years that I could not get released; they weren’t at a program level but at a more systems level. Decision-makers often want information for themselves but, given the politics of decision-making, are naturally reluctant to release it. I think we need a more structured and institutionally independent approach to evaluation than we have now.