Why genuine evaluation must include causal inference

A couple of years ago I was asked by a client organization to review a set of evaluation reports and was staggered to see the following disclaimer buried deep in the details:

“This is an outcome evaluation, in which actual changes are measured and documented, but it is not able to measure whether or not it is the program itself which causes the changes, sometimes called the impact of the program.”

The report had documented which variables/realities had changed during the course of the program, with no attempt to say whether or not the program had anything to do with those changes – in other words, they may just have been coincidences. I was incredulous. How does documenting possibly unrelated changes say anything about the value of the program? And isn’t that exactly what we are trying to find out, if this is an evaluation of the program?

But even more disturbing: in the context of something labeled an “evaluation report”, listing these changes as “outcomes” implies to any reader who doesn’t dig as far as the fine print that the changes were at least partially caused by the program. After all, common sense says that “outcomes” are things that “come out” of the program. Right? I don’t think tacking on a disclaimer is sufficient. I don’t think the word “outcomes” should be used to describe documented changes unless there is at least some evidence that they were caused by the program. In fact, even if the word “outcomes” is not used, the context of an evaluation report implies that any documented changes surely had something to do with the program (otherwise, why would they be there at all?).

A large chunk of the problem, I think, is that many people are still stuck in the rut of thinking that causal attribution isn’t possible without a randomized experimental design, or randomized controlled trial (RCT). So, if they’ve been unable to incorporate one into their design, they either don’t mention causation (while implying it’s sort of there anyway, e.g. by using words like “outcomes”) or they throw their hands in the air and say they can’t say anything about it.

The way I generally approach this is with a fairly long list of methods for causal attribution: some qualitative, some quantitative, some mixed-method, some more powerful than others, some feasible in some settings but not in others. Rather than choose one and expect it to ‘slam dunk’ the causal attribution question, I try to choose two or three that, as a set, will provide at least some support for (or help me rule out) a particular cause. In any particular decision-making context, we need to ask how certain we need to be about this. I argue that even SOME partial evidence of a causal link is a lot better than none at all.

If anyone would like an example of this “patchwork causal inference” method in action, I have one online in a presentation I did locally last year: Causal Inference Nuts and Bolts

Here are the eight methods I use most often for causal attribution (they are illustrated in the presentation I linked to above; see also Evaluation Methodology Basics Chapter 5 on Causation, and the rough sketch after this list for a quick illustration of two of them):

  1. Ask those who have observed or experienced the causation first-hand
  2. Check if the content of the intervention (or, supposed cause) matches the nature of the outcome
  3. Look for distinctive effect patterns (Scriven’s modus operandi method)
  4. Check whether the timing of the outcomes makes sense
  5. Look at the relationship between “dose” and “response”
  6. Use a comparison or control (RCTs or quasi-experimental designs)
  7. Control statistically for extraneous variables
  8. Identify and check the causal mechanisms
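To make two of these a little more concrete (numbers 5 and 7), here is a minimal sketch in Python on entirely made-up data. The variable names (dose, baseline, outcome) and the simulated numbers are hypothetical, and this is only an illustration of the kind of check involved – not a recipe from the presentation or the book.

```python
# A minimal sketch of methods 5 and 7 on hypothetical data.
# Assumed (made-up) variables: dose = sessions attended,
# baseline = pre-program score (an extraneous variable),
# outcome = post-program score.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
dose = rng.integers(1, 13, size=n)        # 1-12 sessions attended
baseline = rng.normal(50, 10, size=n)     # pre-program score
outcome = 0.8 * dose + 0.5 * baseline + rng.normal(0, 5, size=n)

# Method 5: look at the relationship between "dose" and "response".
print(f"Dose-response correlation: {np.corrcoef(dose, outcome)[0, 1]:.2f}")

# Method 7: control statistically for an extraneous variable by including
# it alongside dose in a regression and checking whether the dose
# coefficient survives.
X = sm.add_constant(np.column_stack([dose, baseline]))
result = sm.OLS(outcome, X).fit()
print(result.summary(xname=["const", "dose", "baseline"]))
```

Neither check settles the causal question on its own; the point is that the dose-response pattern and the statistically controlled estimate each add one patch to the patchwork.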

I probably need to revisit this list as I write the second edition of the book, so am very interested in other thoughts and experiences out there on the topic!

I guess my main argument is this: There are various fairly low-cost ways of getting an approximate answer to the causal question, and I think any genuine evaluation – yes, including those operating under serious budgetary constraints – should deliver on that rather than opt out. Thoughts?

[This post is a revised version of a discussion in the January 2010 AEA Thought Leaders’ Forum. The link only works if you are an AEA member (with a login) – but this is a great discussion forum, and one of many fantastic benefits for what I think is a very low membership fee.]
