The Friday Funny: I’m a climate scientist!

When there is disagreement about key findings in evaluation (or in science, or in the world in general), should some opinions – like actual expert opinions – be given more weight?

This group of Australian climate scientists thinks so, and has put their message in this creative video (if you can’t view it below, e.g. …

Read the whole post –> The Friday Funny: I’m a climate scientist!

Credibility and independence in evaluation – an alternative view

The standard ‘mainstream’ belief is that one element of credibility as an evaluator comes from one’s independence and the perceived objectivity (lack of bias) that derives from it.

Here in Aotearoa New Zealand, we often find the opposite is the case: one’s credibility with the community and the provider – and with funder and external …

Read the whole post –> Credibility and independence in evaluation – an alternative view

How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)

So you’ve put out an RFP for an evaluation of a policy, program or initiative intended to serve and effect positive change in a “minority” community. All the proposals look terribly impressive, and they all include “cultural experts” on the evaluation team. How can you distinguish the proposals that show a clear understanding of what it takes to do effective and culturally responsive evaluations from those that merely pay ‘lip service’ to cultural competence?

Read the whole post –> How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)

A ‘program logic’ for including ‘outsiders’ in evaluation teams

Suppose you are an evaluator looking to put together a team of colleagues to bid on an evaluation of a program that primarily or exclusively targets members of your own ‘culture’ (ethnicity, gender, sexual orientation, life/health/social history, profession or disciplinary roots, etc – yes, everyone is a member of several ‘cultures’). What are the various reasons for including outsiders (people from outside that culture) on your evaluation team? What is the implicit “problem” or “challenge” you would be responding to with each rationale? In what roles would outsiders be involved? How would that influence your evaluation ‘product’ (the services and the report delivered)?

Read the whole post –> A ‘program logic’ for including ‘outsiders’ in evaluation teams

A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation

**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post that discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’

Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation