Ever wondered what the secrets are to awesome workshop facilitation, the kind that gets you exactly the material you need?
Look no further than the hilarious and informative Stuff Expat Aid Workers Like blog! Written for aid workers in international development, it contains some hidden gems that evaluators can put to good use.
Read the whole post –> The Friday Funny: Facipulation
Jane Davidson interviews Drs. Tererai Trent, Mary Crave, & Kerry Zaleski about their forthcoming AEA workshop: Reality Counts: Participatory methods for engaging vulnerable and under-represented persons in monitoring and evaluation. The approach and methods go beyond funder-driven indicators and focus on “whose reality counts” – capturing community and participant values to help define what a “valuable outcome” or a “good solution” would look like in their reality.
Read the whole post –> Reality Counts: Hot new AEA workshop on participatory M&E with vulnerable populations
I’m not sure I can come up with a ‘Copernican’ revolution on the scale Michael Scriven described in his previous post, but perhaps I can run an idea up the flagpole that came to me as a realization or light-bulb moment and still seems to surprise and sometimes amaze other people I talk to and work with … There is a long-held belief that evaluations that draw explicitly evaluative conclusions are somehow diametrically opposed to, or completely incompatible with, culturally responsive evaluations that fully reflect and respect the cultural values and worldviews of indigenous peoples and others whose voices are often not heard.
Read the whole post –> Rethinking evaluation: Explicitly evaluative and culturally inclusive approaches
**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post that discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’
Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation
Every now and then the question is raised about whether evaluation really needs to incorporate “values”. Can’t we just measure what needs to be measured, talk to the right people, and pass on whatever they (the people and the data) say? Why is there a need to dig into the messiness of “value”? Do we really need to say anything about how “substantial” or “valuable” an outcome is in the scheme of things, whether an identified weakness is minor or serious, whether implementation has been botched or aced, whether the entire program is heinously expensive or an incredible bargain given what it achieves? Do we really need to do anything seriously evaluative in our evaluation work? Yes, we do. And here’s why …
Read the whole post –> Why genuine evaluation must be value-based