Monitoring and evaluation: Let’s get crystal clear on the difference

I often see the terms monitoring and evaluation used in the same breath, and have heard many comment that M&E is usually much more M than E.

It seems to me that the lack of a clear distinction between the two means that evaluation is getting shortchanged.

So, what is the difference?

Monitoring and …

Read the whole post –> Monitoring and evaluation: Let’s get crystal clear on the difference

The Friday Funny: Overly Honest Methods in Evaluation

The scientists are at it again, confessing how experiments really get done:

“We incubated this for however long lunch was.”

“Experiment was repeated until we had three statistically significant, similar results and could discard the outliers.”

“Incubation lasted three days because this is how long the undergrad forgot …

Read the whole post –> The Friday Funny: Overly Honest Methods in Evaluation

Simplicity and genuine utilization

What’s the relationship between simplicity and genuine utilization of evaluation findings? A recent paper from psychologists Christopher Peterson and Nansook Park considers what kind of psychological research has been the most influential over the years. Their conclusion: the work that has had the greatest impact has been breathtakingly simple. What are the lessons here for evaluation and for maximizing genuine utilization of findings?

Read the whole post –> Simplicity and genuine utilization

A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation

**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post that discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’

Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation

Who’s responsible for non-genuine evaluation?

When evaluations turn out to be major disappointments, there’s often a pervasive assumption that it is ALL the evaluator’s fault. In my view, at least as much responsibility rests on the client side. I have seen entire “evaluation” reports with absolutely no evaluation questions, and sometimes with not even a research question in sight! Where on earth was the client when the discussions about what they needed to know were taking place?

Read the whole post –> Who’s responsible for non-genuine evaluation?