The Rise and Risk of Evidence

Our guest blogger this week is Katherine Hay, a senior member of the Evaluation Unit of the International Development Research Centre (IDRC). Based in New Delhi, India, she is an expert on the role of evaluation in development in South Asia. She promotes approaches that assess how women and other marginalized groups benefit from development in the region. Katherine joined IDRC’s South Asia office in New Delhi in 2000 and has undertaken research in South Asia for more than 15 years. Her work with IDRC includes building evaluation curricula in universities in the region and supporting evaluation communities of practice spanning South Asia and Afghanistan. She has written on women’s empowerment, evaluation, and the policy research environment in South Asia. Katherine holds a master’s degree in international affairs from Carleton University in Ottawa. She is sharing with us perspectives from her recent keynote address to the conference of the Sri Lanka Evaluation Association.

In reading the newspapers lately, I’ve noticed an increasing expectation that evidence can give us the answers that policy makers need. I practice evaluation because I believe evaluation can help distinguish what is working from what is not, and for whom. So I should be pleased to see these calls for “the evidence.” I am… and yet I am also somewhat alarmed by this faith in data.

Some people seem to suggest that if we just gather enough evidence, we will be able to ‘fix’ poverty. I think that is both naïve and dangerous. In the New York Times, Nicholas Kristof had a piece called “Getting Smart on Humanitarian Aid,” where he said: “How can we most effectively break cycles of poverty? For decades, we had answers that were mostly anecdotal or hot air. But, increasingly, economists provide answers that are rigorously field-tested.” That sounds good, but do we really have answers, and to what questions?

The evidence Kristof was pointing to drew on the excellent work of Duflo and Banerjee on randomized controlled trials. Kristof, and a string of other journalists, came to the conclusion that “we now have the answers” based on two or three examples, including the cost-effectiveness of improving school attendance by deworming children and providing them with school uniforms. I’ve read the studies. I’m pretty convinced that schools should deworm and that school uniforms in Africa are probably worth the money. But do education policy makers now have all the answers, whereas before they had only ‘hot air’? Not quite.

These are fairly simple interventions. I don’t doubt that they are helpful. But the idea that we have all the evidence we need, or can get it through trials, is not helpful. It dumbs down development problems by arguing that, until now, everyone working in development has been running around with no clue. It suggests that governments, implementing agencies, and funding agencies just need to run some experiments to find out what the policy should be. It’s a simple idea. But poverty and development are complex.

There is nothing wrong with experiments. The right tool in any situation is the one that best answers the questions being asked. My critique is of the idea that development is just about getting the data right, or that evidence is ‘neutral’ and has nothing to do with politics.

Why is this a dangerous idea? Kristof goes on to suggest that “for those who want to be sure, to get the most bang for your buck, there is also a ‘proven impact fund’ that supports interventions like deworming…that have proved to be cost-effective in rigorous trials.” But what would happen if we only funded the proven, cost-effective things, the sure things? It’s hard to be sure about many things that matter.

Funding only the sure things would certainly rule out a great deal of work that many of us think is important, including work to address climate change, violence against women, son preference, human rights, and conflict. Much of this work takes generations to show results and is deeply contextual; in many of these areas we don’t have ‘sure things.’

4 comments to The Rise and Risk of Evidence

  • Abdul Ghani

    I read the view of Ms. Katherine Hay, and in fact I was one of the participants at the conference where she delivered the keynote address. Her view was interesting and, for me, eye-opening, because in our country evaluation has recently become a fashion and everybody claims that all policies should be evidence based. Since that is widely accepted, everything must be based on evidence, and even small-scale studies of an intervention are sometimes generalized when they happen to produce the desired result. And since it is becoming fashionable for policy makers to say that they make decisions based on evidence, some evidence is produced on politically motivated grounds, or with the interests of a specific organization in mind, to justify its interventions.

  • Patricia Rogers

    The quote from Kristof is one of a series of descriptions of the history of development evaluation that present an inaccurate and problematic dichotomy: anecdotes and hot air, or RCTs. A similar view is presented in Esther Duflo’s TED talk (http://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty.html), where she presents the alternative to RCTs as continuing to use leeches without any idea of whether or not they are effective. Instead, the history of health research shows clearly that a range of research methods and designs have been used to build knowledge about what works, for whom, and how.
    And, as Katherine points out, more is needed than just the evidence: attention must also be paid to values and to processes for using evidence in real-world contexts.

  • Patricia Rogers

    And, as Robert Picciotto has pointed out in his presentation to the Auckland branch of anzea (http://www.anzea.org.nz/index.php?option=com_content&view=article&id=52&Itemid=63), there is more to rigor than addressing selection bias. Attention also needs to be given to issues such as measurement and external validity.

  • Mario Bucci

    As Ms. Hay very rightly wrote: “The right tool in any situation is the one that best answers the questions being asked.” Not all questions can be answered satisfactorily by RCTs. And, I would add on a more personal note, the most interesting questions are the ones that cannot. The examples that Ms. Hay gives about trials in schools are telling. Is education policy, at any level, be it national or school level, only a matter of comparing actions, or of deciding whether or not a specific action should be implemented? By the time one has to deal with this type of question, more complex questions have already been answered, perhaps in an implicit or tacit way. And probably without any reference to “evidence,” but based on many “interests” and “values” …