No baseline? A few tips

No baseline? Not ideal, but there are ways around it. A few tips …

The first places I go are (a) the people who were around early on when the program or policy was launched and (b) the documentation that got it approved and funded.

Somewhere in there you will find “why we needed this” – a summary of the pre-implementation situation, why it needed to be improved, and (if you’re lucky) why this particular initiative was the right approach. At the very least there should be a description of the baseline situation, or someone you can ask for one.

Hopefully there’ll be some indicators or metrics available, but don’t discount the usefulness of qualitative descriptions. And don’t forget to ask folks in the community (e.g. elders), not just officials or program staff.

If it’s not in the program documentation, hunt for the same information elsewhere – in the community, or among others who work with the communities (e.g. service providers, volunteers).

The second strategy comes in when you go to capture evidence of important outcomes: some of those will be things you can gauge through surveys and interviews. In that case, build two-pronged questions into your instruments that ask not just “where are things at now?” but also “how were things before the program started?” (a retrospective pre-test).
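To make the two-pronged idea concrete, here’s a minimal sketch of how you might score paired “then” and “now” ratings once they’re collected. The 1–5 scale, field names, and numbers are illustrative assumptions, not anything from a real instrument.

```python
# Hypothetical scoring of retrospective pre-test ("then") vs. current ("now")
# survey ratings on a 1-5 scale. Field names and data are illustrative only.

responses = [
    {"id": "R01", "then": 2, "now": 4},
    {"id": "R02", "then": 3, "now": 3},
    {"id": "R03", "then": 1, "now": 4},
]

# Change score per respondent: positive = reported improvement
# since the program started.
changes = [r["now"] - r["then"] for r in responses]

mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Mean reported change: {mean_change:+.2f} points")
print(f"{improved} of {len(responses)} respondents reported improvement")
```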

You also need to build in some way of capturing causal contribution. Even with fairly messy data, it’s still possible to find at least some evidence of whether and how the program contributed, and I have a bunch of methods that work with qualitative evidence as well.
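One simple way to fold contribution in is the attribution-percentage question Fi describes in the comments below: ask respondents what share of the change they’d put down to the initiative, then weight the change score by it. A hedged sketch, with illustrative names and numbers only:

```python
# Hypothetical: combine a respondent's reported change with the percentage
# of that change they attribute to the initiative (see the comments below).
# All field names and values are illustrative assumptions.

responses = [
    {"id": "R01", "then": 2, "now": 4, "pct_attributed": 75},
    {"id": "R02", "then": 3, "now": 3, "pct_attributed": 0},
    {"id": "R03", "then": 1, "now": 4, "pct_attributed": 50},
]

for r in responses:
    change = r["now"] - r["then"]
    # Scale the raw change by the respondent's own attribution estimate.
    attributed = change * r["pct_attributed"] / 100
    print(f'{r["id"]}: change {change:+d}, attributed to program {attributed:+.1f}')
```

The qualitative follow-up (what else helped, or got in the way) then sits alongside these numbers rather than being replaced by them.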

I’m planning to run some webinars on these topics in 2013, so if anyone is interested, please add yourself to my newsletter to get updates, and I will keep you posted! http://realevaluation.com/about/join/register-2/

2 comments to No baseline? A few tips

  • Kia ora Jane – great crystal ball gazing techniques. Retrospective pre-testing is a favourite of mine so I agree about asking people, “Knowing what you know now, where do you think you were at before this initiative?” This is because people don’t know what they don’t know (as our colleague Kataraina Pipi is so good at reminding us), so we need to get them to assess their own baseline.
I’ve also gone one step further and asked people how much of any change between ‘then’ and ‘now’ they’d attribute to the initiative. I’ve often been surprised that people understand this question and put this down to the skill of the interviewers asking it. We often get a percentage out of people, “What percentage of any change you’ve experienced would you say was because of the initiative?” Then they can explain the other things in their lives that may also have helped them out, or perhaps prevented the initiative from helping them as much as they thought it could have if things had been different for them. So a mix of quantitative and qualitative data – a beautiful thing!
    Count me in on those webinars! Merry and happy, Fi

  • Kia ora Fi,

    I think evaluators and researchers often discount one of the most important approaches ever to inferring causation – “just ask people!”

    Like you, I’ve been amazed at how clear people are about how influential a program was in contributing to outcomes. But, as you say, there’s some skill in asking about it the right way.

    Look forward to you launching some webinars too, Fi! :)

    Beaming in from a rainy Christmas day and kid chaos!
    Jane