“Genuine evaluation” snippets from across the globe

What does the term “genuine evaluation” mean to the rest of the planet, including those who don’t identify as “evaluators”?

We’ve collated a few snippets from our Google Alerts file to give a picture that is sometimes humorous, sometimes genuinely insightful. Of particular interest as we refine our thinking are the similar themes in the quotes coming out of product, service and program evaluation.

Snippet & Source:
In his blog post entitled Obama and USAID: the need for genuine evaluation, Ajoy Datta from the Overseas Development Institute highlights some of the tell-tale signs of the non-genuine evaluations that characterised pre-Obama administration evaluations in USAID …

programme evaluation has increasingly focused on the reporting of activities and outputs for budgeting and accountability purposes, rather than changes in welfare of the poor. …

Further, fear that negative evaluations would play into the hands of foreign aid critics in Congress and the State Department has meant that many evaluations have been hidden, limiting the chances of learning from either successes or failures.

In 2007, the USA spent almost $22 billion on development aid … These mammoth amounts often dwarf the national budgets of developing countries. Questions remain, though, about whether the money spent achieved the hoped-for changes in people’s lives. What impacts have HIV and AIDS control efforts had on the health of populations, for example? What has changed as a result of democracy and governance assistance? What are the underlying factors that determine success or failure? Is USAID improving its performance as a result of learning?

Genuine Evaluation Concepts:
Genuine evaluations MUST include outcomes, not just activities and outputs.

Negative evaluations need to be used to learn from failures – not buried for political reasons.

Especially when large expenditure is involved, it is absolutely critical to document how people’s lives (and societies and governments, etc.) have changed as a result.

Don’t just document the outcomes of “the program”; turn the camera back on the organization delivering the program – are they improving their own performance through learning?

Snippet & Source:
Obviously, in reading a review, you want to analyze it for thoroughness. You wish to obtain a good sense for the product or service without having to ask too many questions. If you find yourself wondering a lot, then the review is bad.

But basically, what you would desire to have in a review is a genuine evaluation of the service or product and a comprehensive description of the numerous functions of the write-up rewriting system. The review must describe both benefits and disadvantages to offer you clues on whether or not you’d want to consider the product.

(from a Fast Views blog post about what constitutes a good review – in this case, of software to help add ‘spin’ to articles used for marketing purposes)

Genuine Evaluation Concepts:
Provide a clear description of the evaluand. Make sure you cover all important bases and don’t leave major questions dangling in the reader’s mind.

Make sure the evaluation is balanced, describing both benefits and disadvantages so that the reader can work out whether the evaluand would be a good fit for a particular individual, organization, or community.

Snippet & Source:
Fascinating how the essence of a topic is ignored and great importance is given to specific details without a genuine evaluation. (from a discussion in the Somalinet online community)

Genuine Evaluation Concepts:
Cut to the chase; don’t get lost in details of lesser importance; identify the essence of the issue and focus primarily on that.

Snippet & Source:
For a genuine evaluation, you will need to lay approximately 15 minutes in the bed. Under no circumstances select a mattress just by sitting on it. (from Features of a top-rate adjustable bed mattress)

Genuine Evaluation Concepts:
Evaluate from a consumer perspective, and be sure to include an experiential component that closely simulates the actual consumer experience.

Snippet & Source:
… New Jersey’s reporting methods do not allow for a genuine evaluation of its wetlands programs or of the status of wetlands in the state. This is also true of the information provided by Delaware in its 2008 Integrated Report, which allows for no comparison whatsoever. Delaware reports wetland losses for the years 1981 through 1992, but provides information regarding the location and number of acres restored for the years 2006 through 2008. Like New Jersey, Delaware’s reporting does not allow for an evaluation of its wetlands program or of the status of wetlands in the state. (from Susan M. Kennedy’s article, Programs That Need Fixing)

Genuine Evaluation Concepts:
Important to tell the whole story, including relevant comparison data for the same time periods. Snippets of factually correct information can be misleading.

Snippet & Source:
What I am talking about is a genuine evaluation and dismissal of an event (and related emotions) that has been judged to be inconsequential. “Overlook” the small stuff and save “forgiveness” for the big things… (from the blog of Baptist minister Hixon)

Genuine Evaluation Concepts:
Clearly determine what’s important and what’s relatively inconsequential – and make sure this is clear when you present your findings.

3 comments to “Genuine evaluation” snippets from across the globe

  • Irene Guijt

    Some important contrasts presented but also one that doesn’t entirely align – tell the whole story but cut to the chase? Include activities, outputs and outcomes AND identify the issue of essence/determine what’s important? Um? Are both possible at the same time?

    I understand the need for both as I am heavily involved in an M&E system lasting 5 years that requires me absolutely to do and know both. But from the outside, what’s the real message? Emerging from a year of excess detail that has enabled me now to ‘identify the issues of essence’, I would have liked to have had shortcuts 18 months ago to help me prioritize a bit earlier. My capacity limitation? Perhaps. Probably. Surely. But I suspect I’m not the only one facing this choice of opting for less once you know the whole, and simplicity of design on the other side of initially messy trials. Do we learn how to identify the essence? Or do we get given the advice of ‘not nice to know but need to know’ and then need to figure it all out through trial and error?

A general musing on how evaluation capacity is built and where guidance is given on how to prioritise when dealing with large and diverse efforts (11 countries, 170+ organisations, 5 years, multi-donor, multi-level).

  • Patricia Rogers

    Good question! Probably the answer does lie in managing the paradoxes you have identified. So simultaneously focusing on the specific evaluation questions and criteria, as Jane advocated in evaluation reporting, and being open to the messiness and detail of what is happening.

    Being able to do this effectively takes more than one person, which is why I think evaluation needs to be a team sport.

  • Jane Davidson

    Irene, thanks for the question/comment – I actually think this is hugely important, and (as Patricia says) an ongoing paradox management challenge in all evaluation.

    I started a response here but it got a bit long and (I think) interesting, so I will put it up as a separate post early next week (Monday our time) so that others on the feed won’t miss it.