The Friday Funny: Overly Honest Methods in Evaluation

The scientists are at it again, confessing how experiments really get done:

“We incubated this for however long lunch was.”

“Experiment was repeated until we had three statistically significant, similar results and could discard the outliers.”

“Incubation lasted three days because that’s how long the undergrad forgot the experiment in the fridge.”

“Slices were left in a formaldehyde bath for over 48 hours, because I put them in on Friday and refuse to work weekends.”

“We used [program] because doesn’t everyone else?”

For more, see the link above or follow the Twitter hashtag #overlyhonestmethods.


But, more importantly, let’s have a go at our own (thanks to Ann Emery for sending out the challenge on Twitter!) …

How evaluation really gets done


  1. We used [evaluation approach] because it’s our favorite.

  2. We turned all the key evaluation questions into interview questions and reported a summary of what people said as our evaluation findings.

  3. We used “mixed methods” (i.e. long quantitative survey with a few open-ended “comments?” questions) – and then reported the quantitative and qualitative results completely separately because we didn’t really know how to ‘mix’ the methods.

  4. We graphed absolutely every result because it was an easy way to pad out the report and make it look impressive.

  5. We didn’t bother asking any high-level evaluative questions just in case anyone actually expected us to answer them.

  6. Causal inference was too hard, so we redefined the word “outcome” so that it didn’t necessarily mean anything that “came out of” the program.

  7. We created a long list of detailed recommendations to make the report look really useful.

  8. We wrote a clear, concise, focused report as a first draft – but then the client asked us to make it bigger and harder to read so that it would look like they got value for the money spent on the evaluation!

What would you add? Go to the post on our site to add a comment. And then tweet it out using the #overlyhonestmethods and #eval hashtags!

1 comment to The Friday Funny: Overly Honest Methods in Evaluation

  • Overly Honest Evaluator

    We used 8-point font in our research poster so nobody at the conference would bother reading about our botched study.

We surveyed somewhere between 80 and 90 people, depending on how you clean the data.

    We transcribed all the interviews, except for the ones that didn’t support the findings the clients were looking for.

We used the [Mr. Fancy Person’s Name] statistical test because that’s the first formula we saw when flipping through our old statistics textbooks.

    The first suggestion on our list of recommendations is that “more research is needed” to make sure we get re-hired for a second year.

    We rotated and flipped the graphs on their axes to make sure “all the numbers were going up” like the client requested.