The trials and tribulations of trials

Katherine Hay continues her guest blogging on evidence and evaluation.

Ben Goldacre in The Guardian wrote that UK politicians “are ignorant about trials and they’re weird about evidence.” He contrasts this with international development, where he points to the “amazing work testing interventions around the world with proper, randomised trials.”  He goes on to say that policy makers in the UK need only “define your outcome, randomise…and you’ll have the answer by the end of next parliament.”  He notes that all these trials (somehow) won’t cost money but will save unprecedented amounts of money.  He then concludes that “politicians are…too arrogant to have their ideologies questioned, and too scared…of hard data on their interventions.”

It’s an entertaining article.  The idea of doing a trial on every single UK policy is funny.  The idea that such trials are free is even funnier.  Imagine how British parents would react when they brought their kids to school and were told which group their child had been randomly assigned to?  Perhaps:

  • In a large class but with a highly rated teacher.
  • In a small class but with a less experienced teacher.
  • In a small class with no hot lunch…

And so on.  The permutations needed to test every UK policy would be never-ending.

Goldacre is obviously being extreme to make a point.  But is he correct?  Have countries that have conducted randomized trials saved huge amounts on their interventions?

I’ve seen no evidence that trials are more likely to inform policies than other evaluations or research. I expect that they are subject to the same challenges of use as other types of evidence.

If we accept that “working” can mean different things to different groups, and that views on what is ‘worth the money’ usually vary with people’s values and their position in society, then why would we assume that studies with statistical power will lead to change on the ground?  Evaluation can give us more evidence – and must give us better quality evidence – but the idea that policy making is just a computation of evidence is wrong.  Evidence is only one piece of policy making.  Evidence can be, and often is, interpreted and used to reinforce dominant policies.

For example, the country where I live, India, has a system that distributes grain to the poor (the Public Distribution System, or PDS).  Some people think this system should be replaced with one in which cash is given to poor families, who can then buy the food or grain they choose.  Others feel that dismantling the system will mean that food grain which was reaching the poor and their children will be replaced by spending on things like alcohol.  Different groups have done studies on whether people want this change.  Some studies show that people do, and some show that they don’t.

Part of the solution is about design.  You have to be confident that the evidence you have is good quality.  Do people really want the change or not?  But it’s not just about design.  Even with convincing findings, the policy maker has multiple elements to weigh.  In New Delhi, the capital, the government decided that it wanted to experiment with the cash transfer.  But was that because they were comfortable having their ‘ideology tested’ or because their ideology led them to prefer such a system?  They were criticized for the latter – for having a position.  But they were elected on their positions; pushing for more open reality testing is not about wishing away positions.

If they get the design right, they may know fairly accurately how many families want or do not want changes to this system, and if they run a trial well they may also learn about some outcomes.  But that data won’t tell them the ‘right’ choice.  For example, how much of an increase in alcohol consumption is acceptable, or is outweighed by the poor gaining greater control over spending choices, or by increased efficiencies?  Those decisions are values based, and values are often political.  As Abhijit Sen, a noted economist and member of the Indian Planning Commission, observed, “politicians will never accept a dismantling of the PDS,” adding, “Forget the politicians, what matters most is what the voters think.”

We cannot wish away politics, nor should we want to.  My point is that we need to get much more strategic about pathways to use if we want to influence policy with evaluation.

Let me give you a wonderful final little example.  Two PhD candidates from Yale ran an experiment in a New Delhi slum.  The subjects wanted to apply for ration cards.  They were randomly assigned to one of four groups.  The first group applied for the ration card and did nothing more; the second attached a letter of recommendation from an NGO to their application; the third paid a bribe after putting in their application; and the fourth enquired about the status of their ration card application through a right to information (RTI) request.  The researchers found that the group that paid a bribe was the most successful, but the group that put in an RTI request was almost as successful.  Hardly anyone in the other two groups received their ration card.

Clever experiment.  They answered an interesting question and will likely get their PhDs in the process.

But corruption in the ration card system is not fixed.  Also, NGOs and others were already using the RTI Act for exactly this kind of thing.  This experiment adds to the existing evidence that the RTI Act is a useful tool against corruption.  Is it THE answer?  No.  Is it helpful?  Yes.

Assuming that evidence alone will change things is wrong.  Evidence matters, and should be made to matter more, but it’s not the only thing that matters.  Recognizing this doesn’t weaken evaluation; quite the opposite – it creates greater opportunities for ‘genuine evaluation’.



