The Friday Funny: Intradisciplinary evaluation of scientific papers

Just to follow up on one of Michael Scriven’s comments in the Rethinking all of evaluation discussion, where he reminded us of “a Copernican revolution in the perceived status of evaluation”:

It is the recognition that evaluation is in fact the logical backbone of every discipline, the indispensable guardian of quality that separates pseudo-science from real science, astrology from astronomy, creationism from evolution, etc. It’s no mere matter of taste that establishes the difference between good and bad explanations, data, and inferences, and the ability to make those distinctions—i.e., the ability to evaluate—is essential to science; without it, there’s only empty rhetoric and mere matters of taste.

Elsewhere, Michael has referred to this as ‘intradisciplinary’ evaluation. It seemed apt for our Friday Funny this week to recycle a gem posted on EVALTALK 6 years ago by Tom Lengyel, picked up on another listserv many years before (original authorship unknown) …


In each couplet, the first is the phrase as it appears in the scientific literature;
the second is the translation as to what it *really* means:

It has long been known that… = I haven’t bothered to look up the original reference.
It is believed that… = I think…
It is generally believed that… = A couple of other guys think so, too.
It is not unreasonable to assume… = If you don’t believe this, you might as well stop reading here.
A preliminary examination revealed… = One of my grad students pointed this out to me.
Four samples were chosen for further study. = The others didn’t make sense, so we ignored them.
Results from the third sample may be of somewhat lower confidence… = I dropped it on the floor.
…but are consistent with the data obtained from the other samples. = …but scooped most of it up.
Handled with extreme care during the entire procedure… = NOT dropped on the floor.
Typical results are shown. = The best results are shown.
Correct within an order of magnitude… = Wrong.
Not inconsistent with other determinations, given our current limited understanding of this field… = Meaningless.
The significance of these results is unclear. = Look at the pretty artifact.
It might be argued that… = I have such a devastating rebuttal to this argument that I shall now deliberately raise it.
We are unable to reconcile our results with those of Hackenbush, but… = Here comes some richly deserved character assassination.
While it has not been possible to provide definitive answers to these questions… = The experiment didn’t prove anything, but at least I can publish the data somewhere.
Much additional work will be required. = The paper isn’t very good, but neither is anyone else’s.
Of great theoretical importance… = I got a paper out of it.
Of great practical importance as well… = I got a grant out of it, too.
These investigations proved highly rewarding… = My grant is going to be renewed.
Thanks are due to Joe Blow for laboratory assistance and to Jane Doe for many valuable discussions… = Joe did all the work, and Jane explained it to me.
A definite trend is evident… = These data are practically meaningless.
These results will be shown in a subsequent report… = I might get around to this if I’m pushed.
The most reliable results are those obtained by Jones… = He was my graduate student.
It is clear that additional work will be required before a complete understanding of the phenomenon occurs… = I don’t understand it.
It is hoped that this study will stimulate further investigation in this field… = This is a lousy paper, but so are all the others on this miserable topic.
A careful analysis of the available data… = Three pages of original notes were obliterated when I knocked over a beer.
A statistically oriented projection of the significance of the findings… = Wild guess.

And the moral of the story? Genuine evaluation, whether intradisciplinary or not, requires the ability to see through unintelligible ‘code’ and unearth what is really going on – and to resist the pressure to translate the evaluation findings back into a similarly evasive and misleading ‘code’ in order to pacify key stakeholders.

6 comments to The Friday Funny: Intradisciplinary evaluation of scientific papers

  • Kataraina Pipi

    That’s funny alright – let’s have a go at doing something like this in the cultural competency space: an evaluative comment and what it really means in cultural terms.
    Example: The cultural elements of the programme were indistinguishable = I didn’t really know what the heck I was looking for, so… what the heck.

  • Jane Davidson

    LOL, great idea, Kataraina!! Shall we try for that as next week’s Friday Funny?

    Anyone else have suggestions of typical phrases and their true meaning? Chime in (or send via the contact page) and we’ll use them to develop a new one that can do the rounds of blogs and listservs.

  • Hi All

    I can’t tell which is the joke and which is the reality! The moral I take from this amusing post is: behind the evidence-based mask of our evaluation world and words, it is easy to say, “These are pseudo-evaluation approaches!”
    For more information please read this article: Neither too narrow nor too broad

  • Michael Scriven

    “Some intercultural variance was observed” = These guys are TOTALLY weird

    “Encouraging progress towards multicultural agreement was achieved” = one respondent got the right answer on the question about standard deviation on the posttest

  • Moein,

    Thank you. I had not realized that someone actually read my dissertation!

  • I wish to acknowledge the contribution of… = I stole their ideas.
    As confirmed by… = They stole my ideas.
    Popularised by… = They made a fortune out of it.
    Although initially successful, subsequent research has shown… = They stole my ideas, made a fortune out of it, and now I’m about to get my own back.