Simplicity and genuine utilization

What’s the relationship between simplicity and genuine utilization of evaluation findings? A recent paper from psychologists Christopher Peterson and Nansook Park considers what kind of psychological research has been the most influential and impactful over the years. Their conclusion: minimally sufficient research.

… the evidence of history is clear that the research studies with the greatest impact in psychology are breathtakingly simple in terms of the questions posed, the methods and designs used, the statistics brought to bear on the data, and the take-home messages.

Although our evaluands are often very different from the subjects of study in both basic and applied psychological research, there’s an important message here for evaluation too. How often do we see the use of overly complicated and sophisticated designs and analyses that, although they might unearth some academically interesting nuances, actually render the findings uncommunicable to the audiences meant to use them?

There’s a delicate balance to be struck here between simple “does it work?” evaluation questions that may wash out what’s really happening and the more important question of “what works for whom, under what conditions, and (when it’s useful to know) why?” And that’s where Peterson and Park provide an important distinction to bear in mind:

Simple does not mean simplistic. Nor does it mean easy or quick. Rather, simple means elegant, clear and accessible, not just to other researchers but to the general public. No one’s eyes glaze over when hearing about a high-impact study. No one feels stupid. No one asks, ‘And your point is?’

Every evaluation client on earth will doubtless resonate with that statement! And many of us have felt that way from time to time while listening to evaluation conference presentations.

But what is it that drives this urge to use overly complicated designs and analyses? I’ve mentioned elsewhere the need for evaluators to unlearn some of our social scientist habits. Peterson and Park agree that the current academic culture has a lot to answer for when it comes to the irrelevance and unusability of psychological research – do these comments not also apply to a lot of the evaluation work we see?

In the current academic culture, complex research designs or analyses create admiration and respect, even when unnecessary given the purpose of a study … Graduate students in particular have learned well this misleading lesson. When we talk to them at conferences about their work, they often regale us with procedural and statistical details of their research but rarely frame them in terms of what they hope to learn.

Once again we are seeing – in research just as in evaluation – the absolutely central importance of clear, evaluative questions to guide the piece of work. What are we trying to learn for the primary intended users? If we don’t ask the important questions, we most certainly won’t get useful answers.

The most important studies in psychology are not just simple…. They are important because they are interesting, and because needless methodological and statistical complexity did not obscure the interesting points they made.

References/further reading:

Keeping it simple
Christopher Peterson and Nansook Park on the lasting impact of minimally sufficient research
The Psychologist, 23(5), pp. 398-401

Improving evaluation questions and answers: Getting actionable answers for real-world decision makers
Jane Davidson on ensuring evaluations actually ask evaluative questions and give clear, evaluative answers
AEA 2009 conference demonstration session, downloadable from AEA e-library

Unlearning some of our social scientist habits
Jane Davidson on how academic training in the social sciences can impede genuine evaluation
Downloadable from the Journal of Multidisciplinary Evaluation, 4(8), 2007, pp. iii-vi
