Monitoring and evaluation: Let’s get crystal clear on the difference

I often see the terms monitoring and evaluation used in the same breath, and have heard many comment that M&E is usually much more M than E.

It seems to me that the lack of a clear distinction between the two means that evaluation is getting shortchanged.

So, what is the difference?

Monitoring and evaluation ask and answer very different kinds of questions – and therefore need different methodologies to generate the answers to those questions.

| Coverage | Monitoring question examples | Evaluation question examples |
|---|---|---|
| Outputs (products, services, deliverables, reach) | How many people or communities were reached or served? Were the targeted numbers reached? | How adequate was program reach? Did we reach enough people? Did we reach the right people? |
| Process (design & implementation) | How was the program implemented? Was implementation in accordance with design and specifications? | How well was the program implemented – fairly, ethically, legally, culturally appropriately, professionally, efficiently? For outreach, did we use the best avenues and methods we could have? How well did we access hard-to-reach and vulnerable populations? Did we reach those with the greatest need? Who missed out, and was that fair, ethical, just? |
| Outcomes (things that happen to people or communities) | What has changed since (and as a result of) program implementation? How much have outcomes changed relative to targets? | How substantial and valuable were the outcomes? How well did they meet the most important needs and help realize the most important aspirations? Should they be considered truly impressive, mediocre, or unacceptably weak? Were they not just statistically significant, but educationally, socially, economically, and practically significant? Did they make a real difference in people’s lives? Were the outcomes worth achieving, given the effort and investment put into obtaining them? |

The need for the program – taken as given, or not?

Another key difference is that the need for the program is generally assumed in monitoring, which basically asks whether a program is on time, on target, and on budget.

In evaluation, it is also part of our job to ask about the need for the program, e.g., “Was the program needed – and is it still? How well does it address the most important root causes? Is it still the right solution?”

After all, an on-target program that is no longer needed isn’t a good or worthwhile one. And evaluation, by definition, would say so. With monitoring, that larger “question the very existence of the program” element is not generally part of the brief.

Want to know more?

I ran a short webinar recently on how to frame high-level questions to guide an evaluation (and how to distinguish these from monitoring questions). I’m planning to create a tutorial on this topic, so if you’d like to hear when this is coming up, please join my free newsletter. And, coaching is available if you’d like hands-on help to apply it to your own work.

I’m also planning some tutorials on evaluative rubrics, which can be developed independently (by the evaluation team) or collaboratively (with community members, program staff, and/or other stakeholders), and used to interpret quality and value in a systematic and transparent way, for qualitative, quantitative, and mixed method evidence.
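To make the idea of an evaluative rubric concrete, here is a minimal sketch in Python. The levels, thresholds, and scoring scale are entirely invented for illustration (they are not from any published rubric); the point is simply that a rubric makes the mapping from evidence to a quality judgement explicit and transparent, rather than leaving it implicit.

```python
# A hypothetical evaluative rubric: each level pairs a quality label with an
# explicit minimum score, so judgements are systematic and transparent.
# Thresholds and the 0-100 scale are invented purely for illustration.

RUBRIC = [
    ("Excellent", 85),  # truly impressive outcomes
    ("Good", 70),       # solid, clearly valuable outcomes
    ("Adequate", 50),   # acceptable but unremarkable
    ("Poor", 0),        # unacceptably weak
]

def judge(score: float) -> str:
    """Map a synthesized evidence score (0-100) to a rubric level.

    In practice the 'score' would be a synthesis of qualitative,
    quantitative, and mixed-method evidence, not a single number.
    """
    for label, minimum in RUBRIC:
        if score >= minimum:
            return label
    return "Poor"

# A program whose synthesized evidence comes out at 72 would be
# judged "Good" against this (hypothetical) rubric.
print(judge(72))  # → Good
```

In real evaluations the criteria behind each level would be written out in words (what does “Excellent” reach or outcome quality actually look like?), ideally negotiated with community members, program staff, and other stakeholders, so that the judgement is defensible rather than arbitrary.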

For those looking for a low-cost resource to explain these concepts, I have an easy-to-read minibook out that explains the key distinctions, as well as some of the methodologies needed to answer evaluative questions (e.g. not just “what are the outcomes” but “how good are the outcomes”).

It’s called Actionable Evaluation Basics: Getting succinct answers to the most important questions [minibook]. Check the reviews on Amazon to see what other people thought – and then add your own review!

This minibook is available in paperback (from CreateSpace), or as a Kindle-format ebook on Amazon (which you can read on a PC, Mac, smartphone, or Kindle) … or on Smashwords (if you want a PDF version to print, or ePub format, etc.).

The Spanish version is also available on Amazon as an ebook (with many thanks to Pablo Rodriguez-Bilella for the translation work!). Please help guide other potential buyers by writing a thoughtful review of it in Spanish – much appreciated! :)

UPDATE: French version coming soon (Feb 2014): Les essentiels de l’évaluation tournée vers l’action: Obtenir des réponses succinctes aux questions les plus importantes. To be first to hear when it’s published (as a paperback and ebook), join my free newsletter.

 

15 comments to Monitoring and evaluation: Let’s get crystal clear on the difference

  • Satish K. Bhalla

    Dear Patricia and Jane

Congrats on bringing out the differences between M&E. We use this jargon so very often, like inseparable twins, without going into the finer details of how the two fit into the underlying conceptual framework. That is where I find your contribution a valuable piece of learning. Thanks a lot for sharing your thoughts with the fraternity.

    Satish

  • Oscar Gonzalez-Hernandez

    I have been working since 1986 in Evaluation. It has become a fashionable subject and the confusion goes much beyond the difference between Monitoring and Evaluation.

I learned the system from USAID, which was the first to use the triad Design + Monitoring + Evaluation based on the logical framework approach, which derived from the old MBO (management by objectives).

I recently participated in “evaluation” exercises, one with the GEF (it was a process study) and the other with the EU (it was an economic benefit analysis)!

Furthermore, a lot of evaluation reports quickly find their way into the archives. In addition to doing the right stuff, much is needed in follow-up work.

    Quo Vadis Evaluation?

  • Good monitoring outputs and results provide the basis for a good evaluation exercise. Evaluation tends to become a one-time snapshot, and because of this, evaluation is often incomplete. Good monitoring records can supplement evaluation practice. Thus, the value of monitoring is important.

  • Thanks, all, for your comments!

    Kimihiro, you are so right that much evaluation is weakened without really solid monitoring feeding into it.

    In my work I am reminded constantly how good monitoring is a critical foundation, and it can go badly wrong when it’s not realized that genuine expertise is needed to get the monitoring right early on.

    I was looking at just such a case this week, with a program that looks like it’s adding amazing value, but the monitoring was so poorly thought out that it’s incredibly difficult to answer the important evaluative questions they now realize they have to ask.

  • Alexis Orhacumya

    Hi

I would like to receive, when possible, your comments about monitoring and evaluation.

    Alex

  • Daniel Ticehurst

Thanks Jane. People and organisations often confuse and conflate the two functions. Moreover, interest and attention in monitoring is overshadowed by the preoccupation with evaluation. It is as if the challenges of evaluating development efforts are more involved and complex than helping these efforts make a difference – the bottom-line purpose of monitoring is to improve, not just comment and report on, performance. Rick Davies, who encouraged me with a few others last year to write up more on this, has just posted my thoughts on his website: http://www.mande.co.uk. It’s the first posting. Regards, Daniel (and thanks for your website, by the way – I dip in and out of it regularly).

  • Cormac Quinn

    Jane,

    As I mentioned on Twitter, I found this table to be very useful. My colleagues in DFID have problems understanding and stating clearly the difference between monitoring and evaluation questions in their work. I will use this table in future as a way to brief others on the differences. Thanks again.

    Cormac

  • Daniel Ticehurst

I forgot to ask you one thing. You say the need for the program is generally assumed in monitoring, which basically asks whether a program is on time, on target, and on budget. Really? Programs simply plod on for x years, blindly complying with an action-and-change theory incarnate that leaves alone the quality and relevance of the support, with associated assumptions unchecked, and wait for an evaluation to assess whether they’re needed? Your table begs some questions, therefore.

  • Matt Galen

    Hi Jane,

    I realize I’m coming to this quite some time after you posted, but this was so helpful – I had to say thank you. I have been looking for a clear explanation of the distinction between M&E that would be easy to explain to clients. Yours is the best I’ve seen. Great stuff and much appreciated.

  • Thank you so much, Matt, Cormac, & Daniel! Glad you found it useful.

    Daniel, excellent point to raise, and my apologies for not responding to it earlier.

    I totally agree with you that good monitoring is the base requirement; without it, evaluation that drops in infrequently often ends up being (as we say here in New Zealand) about as much use as an ashtray on a motorbike. :)

    Evaluation can certainly be used for improvement (i.e. formative) purposes, not just to comment on performance.

    To me, the very best monitoring systems have an evaluative component that asks not just “are we on track” but “is this good enough?” – including some of that stepping back and making sure we are doing the right thing, not just doing things according to plan.

    It’s no accident that M&E are talked about in the same breath. They are both needed and should go hand in hand.

    Evaluation is so much more useful and insightful when it can add a new layer on top of an already solid monitoring system. And a monitoring system with an evaluative twist is all the more powerful as a management and improvement tool.

  • Daniel Ticehurst

    Dear Jane,

Many thanks for getting back, and I hope the Evaluation Conference in Brisbane went well. Based on the programme’s rich variety, I’m sure it did. What you call the evaluative component of monitoring is really essential: what you find out from it naturally feeds into processes that help you adapt what you do and spend money on, as you say. On something completely different, I have just seen The Sapphires with my daughter – brilliant film. Have you seen it?

  • NSENGIYUMVA Celestin

Monitoring and evaluation complement one another; however, they differ in their scope, methodology, and purpose. While monitoring answers the question “Are we doing things right?”, evaluation seeks to answer “Are we doing the right things?”

  • Dr.Raziq Asar.
February 7, 2014 at 11:20 am
To me, one of the biggest differences between monitoring and evaluation is this:
Monitoring is looking forward and evaluation is looking backward. For example, suppose we have a patient in a hospital in critical condition. Although all the doctors did their best to cure him, unfortunately he died after five days there. Checking the patient’s condition on a daily basis, and adjusting the treatment plan accordingly, was monitoring. By contrast, investigating now to find out why he died – doing an autopsy, reviewing his disease history and treatment plan – is evaluation.
    Thanks
    Dr. Raziq Asar/ Kabul- Afghanistan.

  • Sayed Najeb Amiri

    Dear Jane,

Thanks for your handout; as I read it, I found it very useful, so I have written my e-mail address and would welcome more of your guidance about M&E.
    thanks
    Sayed Najeb

Please write it in slightly easier words, because English is a third language for me.

  • Clement

Thanks for a very educative article. I work as an M&E officer and get a daily bombardment of questions about my role in the programme. I go for the simplest answer, offered by Dr Raziq. This area is widely misunderstood; I would be glad to be emailed about the latest workshops and seminars so that I can grow in the scope and breadth of M&E.

    Clement
