Part III: Funding Randomised Control Trials
Recently, Giving What We Can staff sat down with Holden Karnofsky, the co-founder of GiveWell, to critically explore GiveWell's methodology, research approach, and recommendations. The full interview can be downloaded as an MP3.
You can also read about GiveWell's skepticism and giving now vs giving later.
Rob: While you think there's a lot of value in learning about giving opportunities, you haven't recommended any charities that are explicitly trying to do this, like the randomised controlled trials of JPAL or IPA. Presumably the reason is that it's harder to get evidence of their impact, to connect the dots between the reports they write and spending by governments or aid agencies. Is this a mistake?
Owen: If you think there is some kind of market efficiency in charitable giving, isn't getting more information out there about the best charitable interventions going to be massively good?
Holden: I think that's a reasonable argument. Randomised controlled trials are a good example. We've had tons of ideas about RCTs, but we dropped most of them, because the softer the data is, the more it helps to be an expert in the area before you can have faith in your ideas. We've just never come to an idea about RCTs that was worth our time to write up.
Owen: We're not suggesting that you actually do RCTs, but why hasn't GiveWell looked into a charity that does RCTs in global health?
Holden: The reason we recommend the charities we do is that we think they are the best giving opportunities we can find, and we have a good view of how donations to them will be used. We've tried to understand what general operations support for JPAL and IPA does, and we've even written a review of IPA, but from that we couldn't get any good idea of what they would do with more funding.
Perhaps you would say that more funding would produce more RCTs, but keep in mind the timeline: good RCTs take years from start to finish. The ones we've learned from are the ones in global health, where there are a lot of them and places like Cochrane have pulled them together. We haven't learned as much from RCTs looking more into social science, and we don't think they're worth recommending.
By directing more money toward producing RCTs, it seems we'd just be increasing the supply of RCTs, which is not the same as GiveWell forming hypotheses and seeing how the charity will proceed, given more money.
Rob: You've said that you've found it difficult to determine what JPAL or IPA would fund with extra money, and that they probably don't even know what their marginal product is because of the timescales involved. But is it a sensible requirement to know what they will do? If these organisations have been sensible in the past, might they not make good choices in future?
Holden: I think I've misunderstood the question. There are two potential goals: forming hypotheses and following the impact of donations, and simply increasing the supply of RCTs. I agree that donating to IPA or JPAL will increase the supply of RCTs, but I disagree that this will help with our hypotheses. Only a minority of RCTs actually change the way we think about aid.
Rob: When you're looking at JPAL or IPA, what are you comparing them to? Isn't their impact so hard to determine that we'll never know what it's most appropriate to compare them to, especially when trying to figure out how much we should adjust our estimates downward based on considerations not included in the explicit cost-effectiveness estimates?
Holden: This is another area where there are thousands of possible comparisons, and we should be thinking about that rather than looking for one particular line of comparison. There's a general comparison we can make to the track record of quantitative social science, and to the track record of research in general influencing policy. Also, in general, we should expect that opportunities to do good will be closer to the average than they initially look, especially when we don't know much about them.