The Friday Funny: Data as “the truth”?

In this age of “evidence-based” this and “data-based” that, it’s worth remembering that sometimes the foundations on which the evidence rests can be far flimsier than it might seem when all the data is loaded nicely into the data analysis package …

We found a couple of classics in this vein, submitted to EVALTALK in July 2003 …

The first is from Lisa Johnson, recounting a joke she’d heard at a recent local conference:

“As evaluators we crunch our data by raising it to the “nth” power and then divide that number by “r”, only to find out that the data was entered by a night watchman who says he’ll enter whatever he damn well pleases.”

Another is from Dave Colton, crediting Bill Watterson:

Calvin: “I’m filling out a survey for ‘Chewing’ magazine.”
“See, they asked how much money I spend on gum each week, so I wrote,
‘$500.’ For my age, I put ’43,’ and when they asked what my favorite flavor
is, I wrote ‘garlic/curry’.”
Hobbes: “This magazine should have some amusing ads soon.”
Calvin: “I love messing with data.”
[credit: Bill Watterson]

And a couple more from Mary Sehl:

The first is from the comic strip Sherman’s Lagoon. I don’t know the name of the second character, but will assume he’s Sherman:

Fillmore: “What do you want on your pizza … pepperoni or sausage?”
Sherman: “That’s just the kind of multivariable problem that’s perfect for a computer spreadsheet.”
Fillmore: “Oh no … not another spreadsheet.”
Sherman: “Let’s quantify our parameters … on a scale from 1 to 10, rate your ingredients.”
Fillmore: “Pepperoni 8, Sausage 3.”
Sherman: “I’ll give sausage a 7, pepperoni a 4 … now let’s divide by our respective weights … do a little formatting. Presto, the answer is 48.”
Fillmore: “48? What’s that supposed to mean?”
Sherman: “It means 48. You can’t argue with a number, Fillmore.”
Fillmore: “But how does this result get us any closer to a decision?”
They look puzzled.
Sherman: “Let’s flip a coin.”
Fillmore: “Heads we get pepperoni.”

The next is from Frank and Ernest:

Ernest: “Look, Frank, I filled out the experimental form for the next census.”
Frank: “Okay, but on the first line why did you write ‘fairly relaxed’?”
Ernest: “They asked for my ‘current state’.”
Frank: “And next you wrote ‘quarter past two’.”
Ernest: “For the ‘time at present address’.”
Frank: “How about ‘sitting in a chair’ and ‘red polka dot, 100 percent cotton’?”
Ernest: “They asked for ‘present position’ and ‘brief description’.”
Frank: “Ernie, that survey of yours makes no sense at all!”
Ernest: “You’re telling me! Over here they say, ‘Do not write in space.’ And I’m not even an astronaut!”

9 comments to The Friday Funny: Data as “the truth”?

  • Carolyn Sullins

    I’ve used this story with some of my eval students re making sure your questions are understood (a real-life version of the Frank & Ernest cartoon):

    I was in the car with my 3-year-old and my newborn in the back seat. I slammed on the brakes to avoid a collision, then yelled, “IS THE BABY OK? TELL ME WHAT HER HEAD LOOKS LIKE!”

    After a thoughtful pause my 3-year-old explained, “It’s little, and it has some black hair on it.”


    She paused longer this time, then realizing it was a trick question replied, “It’s round!”

  • Chad Green

    The image above should be reversed so that the brain itself is pushed through the funnel. What is the funnel in this scenario and what function does it serve?


  • At the risk of turning a humorous concept serious: I bumped into one of my old Psychology supervisors (Joel Michell) the other day. He is retired now, but is still writing and arguing that there is no such thing as measurement in Psychology. This has relevance for evaluators using psychometric instruments.

    While it may seem pedantic to quibble about whether something is measurement or assessment, it makes a difference to how you analyse data. Correlations of psychological measures, such as levels of aggression and self-efficacy reported on a rating scale, may be invalid to the extent that the numbers being correlated do not lie on a linear additive scale. For example, our statistics often assume that a rating moving from 4 to 5 represents the same magnitude of change as a rating moving from 1 to 2. If the increments are not equal, then we are severely limited in our choice of statistics, especially the more complex ones. We would do best to stick to statistics designed for ordinal data, rather than assuming that our data follows the rules of interval (scale) data.

    There are methods, using conjoint measurement and unfolding (about which my memory is very hazy), to work out whether self-report scale data is in fact measurement as understood in the physical sciences.

    I copied this from Wikipedia:

    “The definition of measurement in the social sciences has a long history. A currently widespread definition, proposed by Stanley Smith Stevens (1946), is that measurement is “the assignment of numerals to objects or events according to some rule”. This definition was introduced in the paper in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted in the physical sciences, which is that measurement is the numerical estimation and expression of the magnitude of one quantity relative to another (Michell, 1997).”

    Michell, J. (1997). “Quantitative science and the definition of measurement in psychology”. British Journal of Psychology 88: 355–383.
    Michell, J. (1999). Measurement in Psychology. Cambridge: Cambridge University Press.
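To make the ordinal-versus-interval point concrete, here is a minimal sketch in pure Python. The ratings and the "relabel 5 as 20" step are invented for illustration, not anything from Michell: it shows that Pearson's r (which treats the codes as interval data) changes when the categories are monotonically relabelled, while Spearman's rank correlation (an ordinal statistic) does not.

```python
# A minimal sketch (pure Python, invented Likert-style ratings) of why
# treating ordinal codes as interval data matters: Pearson's r changes
# under a monotone relabelling of the categories, while Spearman's
# rank correlation (an ordinal statistic) does not.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    # rank the values, giving tied values their average rank
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# Invented 1-5 ratings from ten hypothetical respondents.
aggression    = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
self_efficacy = [5, 5, 4, 4, 3, 3, 2, 2, 1, 1]

# Same ordering, different (nonlinear) spacing: 5 relabelled as 20.
stretched = [{1: 1, 2: 2, 3: 3, 4: 4, 5: 20}[v] for v in aggression]

print(pearson(aggression, self_efficacy), pearson(stretched, self_efficacy))    # differ
print(spearman(aggression, self_efficacy), spearman(stretched, self_efficacy))  # identical
```

Since the relabelling only changes the spacing, not the order, the rank-based statistic is unaffected; the interval-assuming one is not.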

  • Andrew, your comment brings a couple of things to mind for me.

    One is a great quote (via a colleague on another listserv) from the late Neil Postman, who said:

    As I understand the word, science is the quest to find the immutable and universal laws that govern processes, and it does so by making the assumption that there are cause and effect relations among these processes. In this definition, I stand with Newton, and also with the last of the great Newtonians, Albert Einstein. It follows from this that [the study of human behavior] can in no sense, except the most trivial, be called science. Indeed, it is one of these trivial senses that has led some people to embrace the misleading phrase “social science.” I refer to the fact that scientists, following Galileo’s dictum that the language of nature is written in mathematics, have found that by quantifying nature they can come as close as they dare hope to discovering natural law. But this discovery has led to the pretentious delusion that anyone who counts things is therefore engaged in doing science. A fair analogy to this line of thinking would be to say that a house painter and an artist, each using the medium of paint, must perforce be using it for the same reason. Which I need hardly point out is nonsense. The scientist uses mathematics to assist in uncovering and describing the structure of nature. At best, the sociologist, to take one example, uses mathematics merely to provide some precision to his ideas…

    From: Postman, N. (1984) Social science as theology. ETC: A Review of General Semantics, 41, 1, 22-32.

    The other thought that occurs is an ongoing puzzle I have had at the back of my mind as to why we worry about whether a short Likert-type scale is linear or not. If we’re doing any statistical analyses that assume a normal distribution (which is most of them), we transform the data anyway to make the distribution normal – which then makes the scale definitely nonlinear, even if it was linear to begin with!

    I generally avoid Likert scales as being virtually uninterpretable evaluatively, but that’s another story … :)
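Jane's transformation point can be put in numbers with a toy example (the log transform here simply stands in for whatever normalising transform is actually applied): the one-point gaps between the original Likert codes are equal by construction, but after the transform they no longer are.

```python
# A toy illustration of how a normalising transform (here a log, standing
# in for whatever transform is actually used) destroys the equal spacing
# between scale points that interval-level analysis assumed.
from math import log

codes = [1, 2, 3, 4, 5]            # Likert codes, equally spaced by fiat
transformed = [log(c) for c in codes]
gaps = [round(b - a, 2) for a, b in zip(transformed, transformed[1:])]
print(gaps)  # the gaps shrink: the transformed scale is nonlinear
```

So even if the original scale had been linear, the transformed one is not.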


  • Patricia Rogers

    Thanks for the comments, Andrew. And of course all the Friday funnies have some serious intent underneath…

    I am also bemused by the uncritical way some people will treat anything once it has been quantified. For example, if you conduct in-depth interviews with a number of key informants, purposefully chosen to represent different perspectives, and carefully triangulate the data, including observations of the program in action, some people will dismiss your findings as subjective, qualitative evidence. Whereas if you ask one self-selected person in an organization to fill in a questionnaire using a numeric scale (which is both subjective and non-transparently unreliable, since different people will interpret the scale in different ways), it is somehow seen as credible and objective quantitative data. Which is nonsense.

  • Jane Davidson

    If it’s a number it must be the truth!! LOL

  • “Not everything that counts can be counted, and not everything that can be counted counts.” (Sign hanging in Einstein’s office at Princeton)

  • Thanks for the comments, Jane and Patricia. Jane, re the Likert scale transformation: I think while the data may be transformed such that the dataset represents a normal distribution of scores, the Likert scale itself requires the person making the original rating to turn their attitude (or whatever the item is asking about) into something that has linear increments. Happiness, for example, must be turned into a ‘1’ or ‘2’, or an ‘agree’ or ‘disagree’, as though this concept could be extended in space such that 1 + 2 = 3. I see your point about the data set then being transformed, but the error I am getting at is the validity of asking someone to transform a psychological event onto a linear additive scale in the first place.

    I am interested to hear your ‘other story’ about avoiding Likert scales!

  • Kirsty Fenton

    I remembered the night watchman quote from a lit search – if you’re interested the reference is below.

    Smith (1989, in: Perrin, 2003, p. 18) quotes Sir Josiah Stamp’s warning about the use of statistics:

    “The government is very keen on amassing statistics. They collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But you must never forget that every one of these figures comes in the first instance from the village watchman, who just puts down what he pleases.”

    Perrin, Burt. (2003) Implementing the vision: addressing challenges to results-focused management and budgeting. Available at:

    Smith, Midge F. (1989) Evaluability Assessment: A Practical Approach. Kluwer.