In this age of “evidence-based” this and “data-based” that, it’s worth remembering that sometimes the foundations on which the evidence rests can be far flimsier than they might seem once all the data is loaded nicely into the data analysis package …
We found a couple of classics in this vein, submitted to EVALTALK.
Read the whole post –> The Friday Funny: Data as “the truth”?
Sometimes it’s helpful to see the process of evaluation from the point of view of those providing data. Behind the satire are some interesting observations about validity, standardization, and rigor.
This week’s Friday Funny comes from the BBC comedy *Lab Rats*.
So you’ve put out an RFP for an evaluation of a policy, program or initiative intended to serve and effect positive change in a “minority” community. All the proposals look terribly impressive, and they all include “cultural experts” on the evaluation team. How can you distinguish the proposals that show a clear understanding of what it takes to do effective and culturally responsive evaluations from those that merely pay ‘lip service’ to cultural competence?
Read the whole post –> How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)
**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post, which discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’
Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation