I often see the terms monitoring and evaluation used in the same breath, and I have heard many comment that M&E is usually much more M than E.
It seems to me that the lack of a clear distinction between the two means that evaluation is getting shortchanged.
So, what is the difference?
Read the whole post → Monitoring and evaluation: Let’s get crystal clear on the difference
No baseline? Not ideal, but there are ways around it. A few tips …
The first places I go are (a) the people who were around early on when the program or policy was launched and (b) the documentation that got it approved and funded.
Somewhere in there you will find “why we needed …”
Read the whole post → No baseline? A few tips
Jane Davidson interviews Drs. Tererai Trent, Mary Crave, & Kerry Zaleski about their forthcoming AEA workshop: Reality Counts: Participatory methods for engaging vulnerable and under-represented persons in monitoring and evaluation. The approach and methods go beyond funder-driven indicators and focus on “whose reality counts” – capturing community and participant values to help define what a “valuable outcome” or a “good solution” would look like in their reality.
Read the whole post → Reality Counts: Hot new AEA workshop on participatory M&E with vulnerable populations
Developing good performance indicators is not easy. The history of their use is littered with examples of how they can produce a distorted picture of performance and create dysfunctional incentives. Burt Perrin’s report to the OECD (Organisation for Economic Co-operation and Development), Implementing the vision – addressing challenges to results-focused management, tackles exactly these challenges.
Read the whole post → Punished for productivity – poor use of an average in performance evaluation