The Friday Funny: Data as “the truth”?


In this age of “evidence-based” this and “data-based” that, it’s worth remembering that sometimes the foundations on which the evidence rests can be far flimsier than it might seem when all the data is loaded nicely into the data analysis package …

We found a couple of classics in this vein, submitted to EVALTALK.

Read the whole post –> The Friday Funny: Data as “the truth”?

The Friday Funny – administering a questionnaire


Sometimes it’s helpful to see the process of evaluation from the point of view of those providing the data. Behind the satire are some interesting observations about validity, standardization, and rigor.

This week’s Friday Funny comes from the BBC Comedy Lab Rats.

How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)

So you’ve put out an RFP for an evaluation of a policy, program or initiative intended to serve and effect positive change in a “minority” community. All the proposals look terribly impressive, and they all include “cultural experts” on the evaluation team. How can you distinguish the proposals that show a clear understanding of what it takes to do effective and culturally responsive evaluations from those that merely pay ‘lip service’ to cultural competence?

Read the whole post –> How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)

A ‘program logic’ for including ‘outsiders’ in evaluation teams

Suppose you are an evaluator looking to put together a team of colleagues to bid on an evaluation of a program that primarily or exclusively targets members of your own ‘culture’ (ethnicity, gender, sexual orientation, life/health/social history, profession or disciplinary roots, etc – yes, everyone is a member of several ‘cultures’). What are the various reasons for including outsiders (people from outside that culture) on your evaluation team? What is the implicit “problem” or “challenge” you would be responding to with each rationale? In what roles would outsiders be involved? How would that influence your evaluation ‘product’ (the services and the report delivered)?

Read the whole post –> A ‘program logic’ for including ‘outsiders’ in evaluation teams

A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation

**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post that discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’

Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation