Genuine Evaluation, Real Evaluation, Better Evaluation, Actionable Evaluation …

Recently we’ve noticed that, not surprisingly, some people have been confused about the difference between genuine, real, and better evaluation. Let’s clear that up.

Genuine Evaluation

The blog you’re currently reading, and a set of principles that underpin our work. (And maybe, one day, a book.)

Jointly run by Jane Davidson and Patricia Rogers since January 2010, it’s where we share our ideas about genuine, authentic, practical evaluation – what it looks like, good examples, bad examples, lessons learned, tips, useful practices and methodologies.

We blog here intermittently when we want to share ideas and discuss issues, examples and events. A frequent Friday Funny draws connections between jokes and evaluation ideas.

We also have a Genuine Evaluation jingle, thanks to Kataraina Pipi, Nan Wehipeihana, and Kate McKegg, who wrote it with us and performed it at the 2010 American Evaluation Association conference. (You can sing along here.)

We’ve defined Genuine Evaluation in terms of five principles which summarise the type of evaluation we want to do and to support, and the types we want to hold up for critique and as cautionary tales:

  1. VALUE-BASED – transparent and defensible values (criteria of merit and worth, and standards of performance)
  2. EMPIRICAL – credible evidence about what has happened and what has caused it
  3. USABLE – reported in such a way that it can be understood and used by those who can and should use it (which doesn’t necessarily mean it is used, or used well, of course)
  4. SINCERE – a commitment by those commissioning the evaluation to respond to information about both success and failure (those doing the evaluation can influence this but not control it)
  5. HUMBLE – acknowledges its limitations

This does not restrict ‘genuine evaluation’ to any particular type of method or research design – experimental, quasi-experimental and non-experimental designs can all be appropriate in particular circumstances. But it does exclude:

  • evaluations that don’t get to the point of making a judgment about something being good or bad, better or worse
  • evaluations that uncritically accept stated objectives as the only evaluative criteria, leaving out any unintended impacts
  • evaluations that focus only on the average effect, ignoring or being unaware of differential effects where a program might be effective on average but actually harmful in some circumstances
  • evaluations that are based on inadequate data – for example, using only summarized participant responses as evidence of what has happened and how it happened
  • evaluations that are ignored, buried or censored.

These principles stand us in good stead when we are planning evaluations, and we use them in our other projects, which are badged in different ways.

Real Evaluation

A New Zealand-based, world-class evaluation consulting, capacity building and professional development organization.

Founded in 1997 by director Dr. E. Jane Davidson (and formerly known as Davidson Consulting Ltd), Real Evaluation works with organizations to ensure evaluations ask important, big picture questions about quality, value and importance – and provide clear, concise, direct answers to those questions.

When important outcomes are intangible or hard to measure, it focuses on providing approximate answers to important questions – roughly, whether an intervention provides value for the money and effort invested – rather than precise answers to trivial ones.

Real Evaluation works independently or in collaboration with some of the world’s best evaluators to offer a range of specialist evaluation products, services and resources to organizations in New Zealand and around the globe.

BetterEvaluation

An international collaborative project to improve evaluation practice and theory by sharing information about evaluation options (methods, strategies and tools).

BetterEvaluation was established with financial support from the Rockefeller Foundation, AusAID, IFAD, and the Netherlands Ministry of Foreign Affairs, with founding partners RMIT University, Pact, ILAC and ODI. Its website, which went live in October 2012, provides a framework for planning evaluations (whether as an evaluator or as a manager of evaluation) in the form of 32 tasks grouped into 7 clusters, and offers information on more than 200 different evaluation options.

The site is supported by contributions from over 1,000 individual and organisational members and other supporters.

A series of 8 webinars was presented in partnership with the American Evaluation Association, featuring contributions from Irene Guijt, Simon Hearn, and Kerry Bruce, as well as from Patricia Rogers and Jane Davidson.

Actionable Evaluation: Getting succinct answers to the most important questions

A low-cost e-book – and now a print book! – available to read on Kindle, PC, Mac, tablet or smartphone.

54 pages of advice by Jane Davidson to help you avoid getting lost in indicators, measures, and analysis methods. You can use this guide to get clear, well-reasoned, insightful answers to your most important questions about quality and value.

The book is focused on six key elements of actionable evaluation: clear purpose; the right stakeholder engagement strategy; important, big picture questions; well-evidenced, well-reasoned answers; succinct, straight-to-the-point reporting; actionable insights.

Available as a paperback, a printable PDF, a Kindle edition, or in other ebook formats.

All clear now? Looking forward to catching up with old and new friends at the American Evaluation Association conference in Washington DC next week – where we will be working on genuine, real, better and actionable evaluation.
