Friday, June 13, 2008

Randomized Experiments

Great piece in this week's Economist about the increasing use of randomized experiments in the field of development economics and the methodology's potential usefulness in informing policy. (Here is an earlier post in this space on the same topic.)

I did have one minor issue with the following passage:

Go back to the bednets once more. You might conclude that the trial showed that they should always be given away. Yet it turns out that millions of nets were already in use in the part of Kenya where the field trial took place, so their value was known. The experiment guaranteed supplies, so it did not test the assertion that you need to charge something to encourage reliable suppliers. And the recipients were pregnant women, whereas the point of giving bednets away is to provide anti-malaria treatment universally. The evidence from western Kenya was clear. But it hardly settled the question of whether the government should give bednets away across the country. Questions like that may still have to be made on the basis of the soft evidence that randomistas turn up their noses at.

In my eyes, this point really gets at the value of replication. A well-designed experiment carried out at a single point in time and space offers a way around statistical bias. However, the generalizability of such results is never clear. Reasoned projection is one way to apply the results of an experiment to other contexts, but the ideal approach, where possible, would be to replicate the experiment across different regions and time periods. This has long been recognized in the medical world: everyone seems to realize that you can't rely on a single drug trial on a group of middle-aged males to project treatment effects across age and gender (though actual practice has not quite caught up to this belief).
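To make the point concrete, here is a minimal simulation sketch in Python, with made-up numbers (the baseline incidence, region names, and region-specific effects are all hypothetical). A trial run in one region estimates that region's effect well, but naively projecting the estimate onto a region with a different true effect misleads; replicating the trial there recovers the local effect.

import random

random.seed(0)

# Hypothetical numbers: the true effect of bednets on malaria incidence
# differs by region, which is exactly what a single-site trial misses.
TRUE_EFFECT = {"A": -0.20, "B": -0.05}
BASELINE = 0.40  # assumed incidence without a bednet

def run_trial(region, n=10000):
    """Randomize n subjects 50/50 to bednet vs. control and return
    the estimated difference in incidence between the two arms."""
    treat_cases = treat_n = control_cases = control_n = 0
    for _ in range(n):
        treated = random.random() < 0.5
        p = BASELINE + (TRUE_EFFECT[region] if treated else 0.0)
        case = random.random() < p
        if treated:
            treat_n += 1
            treat_cases += case
        else:
            control_n += 1
            control_cases += case
    return treat_cases / treat_n - control_cases / control_n

est_a = run_trial("A")
print(f"Region A trial estimate: {est_a:+.3f} (truth {TRUE_EFFECT['A']:+.2f})")
print(f"Projecting {est_a:+.3f} onto region B misses its truth of "
      f"{TRUE_EFFECT['B']:+.2f}")
print(f"Replicating the trial in B instead: {run_trial('B'):+.3f}")

The single-site estimate is unbiased for region A, yet it says nothing about region B; only the replication does.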

Replication is likely undervalued among researchers and money-men alike. Re-doing someone else's experiment won't get you published in the American Economic Review (unless you find the opposite result), and once a large experiment has been funded for one purpose, I can see why donors lose interest in paying for it again. Hopefully this incentive structure changes, so that we can derive genuinely context-specific treatment effects that, combined with reasoned projections (we do want to stop repeating the same study once the returns run out), give us a better sense of policy directions, rather than projecting wildly from single studies alone.
