|The scientists are at it again, confessing how experiments really get done:
“We incubated this for however long lunch was.”
“Experiment was repeated until we had three statistically significant, similar results and could discard the outliers.”
“Incubation lasted three days because that’s how long the undergrad forgot the experiment in the fridge.”
“Slices were left in a formaldehyde bath for over 48 hours, because I put them in on Friday and refuse to work weekends.”
“We used [program] because doesn’t everyone else?”
For more, see the link above or follow the Twitter hashtag #overlyhonestmethods.
But, more importantly, let’s have a go at our own (thanks to Ann Emery for sending out the challenge on Twitter!) …
How evaluation really gets done
- We used [evaluation approach] because it’s our favorite.
- We turned all the key evaluation questions into interview questions and reported a summary of what people said as our evaluation findings.
- We used “mixed methods” (i.e. long quantitative survey with a few open-ended “comments?” questions) – and then reported the quantitative and qualitative results completely separately because we didn’t really know how to ‘mix’ the methods.
- We graphed absolutely every result because it was an easy way to pad out the report and make it look impressive.
- We didn’t bother asking any high-level evaluative questions just in case anyone actually expected us to answer them.
- Causal inference was too hard, so we redefined the word “outcome” so that it didn’t necessarily mean anything that “came out of” the program.
- We created a long list of detailed recommendations to make the report look really useful.
- We wrote a clear, concise, focused report as a first draft – but then the client asked us to make it bigger and harder to read so that it would look like they got value for the money spent on the evaluation!
What would you add? Go to the post on our site to add a comment, and then tweet it out using the #overlyhonestmethods and #eval hashtags!