Our guest blogger this week is Katherine Hay, a senior member of the Evaluation Unit of the International Development Research Centre (IDRC). Based in New Delhi, India, she is an expert on the role of evaluation in development in South Asia. She promotes approaches that assess how women and other marginalized groups benefit from development in the region. Katherine joined IDRC’s South Asia office in New Delhi in 2000 and has undertaken research in South Asia for more than 15 years. Her work with IDRC includes building evaluation curricula in universities in the region and supporting evaluation communities of practice spanning South Asia and Afghanistan. She has written on women’s empowerment, evaluation, and the policy research environment in South Asia. Katherine holds a master’s degree in international affairs from Carleton University in Ottawa. She is sharing with us perspectives from her recent keynote address to the conference of the Sri Lankan Evaluation Association.
In reading the newspapers lately, I’ve noticed a growing expectation that evidence can give policy makers the answers they need. I practice evaluation because I believe that evaluation can help distinguish what is working from what is not, and for whom. So I should be pleased to see these calls for “the evidence.” I am… and yet I am also somewhat alarmed by this faith in data.
Some people seem to suggest that if we could just gather enough evidence, we would be able to ‘fix’ poverty. I think that is both naïve and dangerous. In the New York Times, Nicholas Kristof wrote a piece called “Getting Smart on Humanitarian Aid,” in which he said: “How can we most effectively break cycles of poverty? For decades, we had answers that were mostly anecdotal or hot air. But, increasingly, economists provide answers that are rigorously field-tested.” That sounds good, but do we really have answers, and to what questions?
The evidence that Kristof was pointing to drew on the excellent work of Duflo and Banerjee on randomized controlled trials. Kristof, and a string of other journalists, concluded that “we now have the answers” based on two or three examples, including the cost-effectiveness of improving school attendance by deworming children and providing them with school uniforms. I’ve read the studies. I’m pretty convinced that schools should deworm and that school uniforms in Africa are probably worth the money. But do education policy makers now have all the answers, whereas before they just had ‘hot air’? Not quite.
These are fairly simple interventions. I don’t doubt that they are helpful. But the idea that we have all the evidence we need, or can get it through trials, is not helpful. It dumbs down development problems by arguing that, until now, everyone working in development has been running around with no clue. It suggests that governments, implementing agencies, and funding agencies just need to run some experiments to find out what the policy should be. It’s a simple idea. But poverty and development are complex.
There is nothing wrong with experiments. The right tool in any situation is the one that best answers the questions being asked. My critique is of the idea that development is just about getting the data right, or that evidence is ‘neutral’ and has nothing to do with politics.
Why is this a dangerous idea? Kristof goes on to suggest that “For those who want to be sure, to get the most bang for your buck, there is also a ‘proven impact fund’ that supports interventions like deworming…that have proved to be cost-effective in rigorous trials.” But what would happen if we only funded the proven, cost-effective things, the sure things? It’s hard to be sure about many things that matter.
Funding only the sure things would rule out a great deal of work that many of us think is important, including efforts to address climate change, violence against women, son preference, human rights, and conflict. Much of this work takes generations to show results and is deeply contextual; in many of these areas we simply don’t have ‘sure things.’