When there is disagreement about key findings in evaluation (or in science, or in the world in general), should some opinions – like actual expert opinions – be given more weight?
This group of Australian climate scientists thinks so, and they have put their message into this creative video (if you can’t view it below, click through to the full post).
Read the whole post –> The Friday Funny: I’m a climate scientist!
The standard ‘mainstream’ belief is that one element of an evaluator’s credibility comes from independence and the perceived objectivity (lack of bias) that flows from it.
Here in Aotearoa New Zealand, we often find the opposite is the case: one’s credibility with the community and the provider – and with funder and external …
Read the whole post –> Credibility and independence in evaluation – an alternative view
So you’ve put out an RFP for an evaluation of a policy, program, or initiative intended to serve and effect positive change in a “minority” community. All the proposals look terribly impressive, and they all include “cultural experts” on the evaluation team. How can you distinguish the proposals that show a clear understanding of what it takes to do effective, culturally responsive evaluation from those that merely pay ‘lip service’ to cultural competence?
Read the whole post –> How to spot a ‘lip service’ approach to culturally responsive evaluation (a checklist for evaluation clients)
**Revised and updated, March 4th** Here’s an elaborated version of the table presented in our earlier post, which discussed the implicit reasons for creating evaluation teams with cultural (and other) ‘insiders’ in different proportions and in different roles. The table is presented in four columns:

* Implicit “Problem” or “Challenge” Addressed
* Insider Inclusion Rationale
* Likely Practice Implication (How ‘Insiders’ Are Involved)
* Likely Evaluation ‘Product’
Read the whole post –> A fleshed out ‘program logic’ for why and where ‘insiders’ are included in evaluation