A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation

**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post that discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’

Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation

Who’s responsible for non-genuine evaluation?

When evaluations turn out to be major disappointments, there’s often a pervasive assumption that it is ALL the evaluator’s fault. In my view, at least as much responsibility rests on the client side. I have seen entire “evaluation” reports with absolutely no evaluation questions, and sometimes with not even a research question in sight! Where on earth was the client when the discussions were held about what they needed to know?

Read the whole post –> Who’s responsible for non-genuine evaluation?

Why genuine evaluation must be value-based

Every now and then the question is raised about whether evaluation really needs to incorporate “values”. Can’t we just measure what needs to be measured, talk to the right people, and pass on whatever they (the people and the data) say? Why is there a need to dig into the messiness of “value”? Do we really need to say anything about how “substantial” or “valuable” an outcome is in the scheme of things, whether an identified weakness is minor or serious, whether implementation has been botched or aced, whether the entire program is heinously expensive or an incredible bargain given what it achieves? Do we really need to do anything seriously evaluative in our evaluation work? Yes, we do. And here’s why …

Read the whole post –> Why genuine evaluation must be value-based