Giving What We Can’s mission is to be as rigorous as possible when recommending charities to donate to, using evidence, data, and reason to inform our research.
We draw on the best available research from institutions like GiveWell, MIT’s Poverty Action Lab, Innovations for Poverty Action, and the Centre for Global Development, and our research team reviews hundreds of peer-reviewed studies.
Contents
- Our Methodology
- A top-down approach
- Why do we primarily recommend interventions in developing countries?
- Assessment criteria at the intervention level
- Assessment criteria at the charity level
- How we assess evidence
- Key terms
- About us
What sets our methodology apart from most other philanthropic advisors is our top-down approach. While we conduct our research in line with the priorities of the donor, we also want to provide information on the causes in which you can make the most difference. The biggest variation in a charity’s effectiveness comes from the type of intervention it undertakes: a baseline quality of implementation does matter, but once that baseline is reached, implementation quality is likely dwarfed by other factors, such as whether the charity is active in a low-income country, where the need is greater and a dollar goes much further, or whether it carries out a highly effective intervention. In other words, a charity preventing HIV/AIDS through condom distribution or education is almost always going to have a higher impact than a charity treating the symptoms of AIDS. We therefore focus on choosing charities which implement highly cost-effective interventions.
This approach is effective for identifying charities undertaking well-established and evidence-based interventions. However, it has a number of limitations:
- An effective charity might implement an innovative intervention which has no comparable equivalent in the academic literature.
- An effective charity might target important outcomes which are difficult to measure in terms of cost-effectiveness, such as policy change (we have recently begun to evaluate political lobbying charities using a slightly different approach).
- An effective charity might target high-risk, high-reward outcomes, and so lack a strong track record of past success despite a reasonable chance of future success.
We therefore supplement our top-down approach by:
- Evaluating charities with innovative interventions based on (robust) self-evaluations
- Assessing a charity’s track record of achieving its unquantifiable goals using qualitative methods
- Using our experience and interviews with experts to assess the desirability of high-risk, high-reward outcomes and their chance of success
We aim to provide recommendations on how to do as much good as possible with your donation. A fundamental principle of our recommendations is global humanitarianism: the belief that every life has equal intrinsic value, regardless of gender, sexual orientation, or nationality. Our research has led us to conclude that interventions which target people in the developing world tend to be far more cost-effective (in terms of the number of lives improved per dollar) than interventions in the developed world. The primary reason is that, in the developed world, much of the low-hanging fruit has already been picked. For example, NICE, the UK health prioritisation body, will fund any treatment that falls under a cost-effectiveness threshold of £25,000 for each year of healthy life. By comparison, the best available estimates indicate that you can prevent a death in a developing country for about $3,461 (about £2,500) by donating to the Against Malaria Foundation. While these estimates are highly uncertain, they are roughly in line with what health economists estimate it will cost to prevent a death in low-income countries over 2015-2030: $4,000-11,000 per death prevented. This large divergence indicates you can have far more impact by donating overseas, now and in the years to come, as the funding gap for health in developing countries will remain billions of dollars per year for the foreseeable future.
Other developed countries use similar healthcare prioritisation approaches. Though there are exceptions, this means it is usually difficult to find highly cost-effective donation opportunities in developed countries, as the most cost-effective interventions will typically already be funded by the government. Similar considerations apply to research funding, where research into diseases which primarily affect people in poorer countries is systematically underfunded.
When evaluating potential recipients of a donation, we aim to estimate the counterfactual marginal impact of a donation in terms of improving lives. The counterfactual question asks what would have happened if you hadn’t donated to the charity: would somebody else have donated instead? Average cost-effectiveness simply divides a charity’s budget by, say, the number of people helped. In contrast, marginal cost-effectiveness looks at how much impact an additional donation has at the margin. Where possible, we attempt to quantify this impact across a range to allow for easy comparison between different charities. Where the impact is impossible to quantify, we rely on qualitative analysis.
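The difference between average and marginal cost-effectiveness can be made concrete with a toy calculation. The numbers and the diminishing-returns curve below are purely illustrative assumptions, not estimates for any real charity:

```python
# Illustrative only: a hypothetical charity with diminishing returns to funding.
def total_helped(budget):
    """Cumulative number of people helped for a given budget (made-up curve:
    the first dollars reach the cheapest-to-help recipients)."""
    return 100 * (budget ** 0.5)

budget = 1_000_000   # current annual budget in dollars (hypothetical)
donation = 10_000    # your additional, marginal donation

# Average: total people helped divided by total budget.
average_ce = total_helped(budget) / budget

# Marginal: extra people helped by the extra donation, per dollar.
marginal_ce = (total_helped(budget + donation) - total_helped(budget)) / donation

print(f"Average:  {average_ce:.4f} people helped per dollar")
print(f"Marginal: {marginal_ce:.4f} people helped per dollar")
```

With diminishing returns, the marginal figure (here about 0.05 people per dollar) is lower than the average (0.1), which is why we estimate the impact of an additional donation rather than dividing a budget by total people helped.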
We use a framework (based on GiveWell’s framework) to split counterfactual marginal impact into its constituent parts. The main criteria we look for when evaluating interventions are:
Importance asks what the benefit of the intervention would be if it were successful in solving the problem. For example, developing a new medical treatment may be important if it treats a disease which affects a lot of people, has a very high impact on their lives, and current treatments are expensive or ineffective. Or rolling out a proven health intervention may be important if the disease it treats (or prevents) has a high impact on individual lives.
Take malaria, for instance: it is a very big and serious problem. The following graph shows the causes of all deaths of children under 5 annually, with the size of each segment representing the fraction of deaths due to that cause. The Global Burden of Disease study estimates that almost 10% of all deaths of children under 5 globally are due to malaria:
Tractability means that we give stronger weight to problems that we can plausibly make good progress on with our limited donations. Tractable problems are more likely to see results, so it’s more likely that trying to solve them will have a big impact. We still consider difficult problems — we just weigh them up against the other criteria. We are therefore less likely to endorse causes that hinge on a problem that seems impossible to solve or for which we would need more money than we have. For example, researching a new medical treatment may be tractable if there is a high probability that the research will in fact lead to a new medical treatment. Or rolling out a proven health intervention may be tractable if it can help a lot of people for a given amount of money. It is also important to remember that tractability is always subject to change. The problem of eradicating smallpox was once totally intractable, but became tractable after advances in immunology and healthcare delivery.
A good example of an extremely tractable intervention is school-based deworming - where every child gets a pill against worms and is free of parasites for at least a few months. You can see how efficient this is here:
Similarly, a billionaire could conceivably fund the development of a vaccine against HIV/AIDS, but this option is not open to small donors (this might change in the future if more Giving What We Can members coordinated to use their collective donation power).
Neglectedness means that the intervention is not currently receiving enough funding relative to its importance and tractability. Neglectedness is important because a neglected intervention can be scaled without reducing its marginal impact.
An intervention could be neglected for a range of reasons. It might not be commercially viable for a big pharmaceutical company to invest in the cure for a particular disease, or donors may have a bias to fund local causes in developed countries instead of donating overseas.
The opposite of neglectedness is crowdedness. Some interventions (that would otherwise score highly on our criteria) are in a crowded field, so the marginal impact of a donation will have diminishing returns and will not be as large as the average impact. For example, some vaccinations are among the most cost-effective health interventions, but are already well-funded. Neglectedness is not just a measure of how much funding the cause area currently receives. Some interventions (such as unconditional cash transfers) may have approximately linear marginal returns for a large amount of additional funding, despite the broad cause area of global poverty receiving a significant amount of attention.
For instance, while funding for emergency food aid has increased dramatically in recent years, basic nutrition interventions such as micronutrient fortification have remained relatively underfunded:
The precise definitions of Importance, Tractability and Neglectedness depend on the problem which the intervention is trying to solve, and the manner in which it is trying to solve it. This framework is used as a rough indicator of whether charities working in a space might be very effective. Table 1 gives an idea of how we would apply these considerations for different types of intervention.
| Type of intervention | Direct health interventions | Medical research | Political advocacy |
| --- | --- | --- | --- |
| Example of a problem it addresses | A disease with a high disease burden (e.g. 200 million annual cases of a disease) | Currently no effective vaccine for a disease | A policy hinders developing-world economic growth |
| Importance | How bad is the disease for each individual? | How bad is the disease for each individual? How many people have the disease? | What would be the impact of changing the policy? |
| Tractability | How many cases can be avoided or treated on average for $X? | How much progress is being made on the research for each $X spent on average? Is there enough funding to bring this project to fruition? | How much progress is being made on the policy for each $X spent on average? Is there enough funding to bring this project to fruition? |
| Neglectedness | How different will the marginal impact be from the average impact? Have the low-hanging fruit already been picked? | How different will the amount of progress made for each additional $X spent be from the average? Are other (big) donors already very invested in the charity? | How different will the amount of progress made for each additional $X spent be from the average? Are other (big) donors already very invested in the charity? |
Counterfactual marginal impact of a donation
Table 1: Examples of the ITN framework applied to different types of charities
Cost per disability-adjusted life year (DALY) averted is the incremental cost to deliver incremental DALY savings versus only the standard of care. Probability of success is the estimate of probability of technical and regulatory success (PTRS) informed by industry benchmarks and expert opinion. NRRV: Non-replicating rotavirus vaccine. Both the cost and the probability of success are dynamic values and subject to change with information that is constantly evolving.
After identifying important, tractable and neglected areas, we can compare different interventions in several ways. One is measuring health outcomes in terms of Disability-Adjusted Life Years (DALYs), a measure of disease burden calculated by summing years of life lost and years of life lived with a disease. We can then measure cost-effectiveness in dollars per DALY averted: essentially, how many dollars are needed to enable one additional year of full health?
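The DALY arithmetic can be sketched for a single hypothetical disease case. The disability weight, durations and cost below are made-up illustrative values, not Global Burden of Disease figures:

```python
# Hypothetical case: a disease with disability weight 0.2 that a person
# lives with for 10 years before dying 30 years earlier than an
# idealised life expectancy.
disability_weight = 0.2    # 0 = full health, 1 = death
years_with_disease = 10
years_of_life_lost = 30

yld = disability_weight * years_with_disease  # Years Lived with Disability
yll = years_of_life_lost                      # Years of Life Lost
daly = yll + yld                              # disease burden for this one case

# If an intervention averts this case for a hypothetical $6,000:
cost = 6_000
cost_per_daly_averted = cost / daly

print(f"{daly} DALYs; ${cost_per_daly_averted:.0f} per DALY averted")
```

This is the calculation behind the $/DALY figures used throughout: burden per case (YLL + YLD) combined with the cost of averting that case.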
The Disease Control Priorities Project compares different health interventions based on their cost-effectiveness. The tables below list them from the least cost-effective to the most cost-effective, both in terms of dollars per DALY averted and dollars per death averted.
Note that these cost-effectiveness estimates should not be taken literally and are always subject to substantial uncertainty. Moreover, in the past there have been significant errors, biases in cost-effectiveness analyses, and variations in the methodologies used. Nevertheless, we believe that comparative cost-effectiveness analysis is still a good proxy measure for finding very effective interventions and can thus provide a useful starting point for further inquiry, especially given that there are orders-of-magnitude differences between the best interventions and the average intervention. Moreover, health is not the only outcome that counts: the charities we recommend often carry out health interventions that have substantial co-benefits, such as improved child development, educational attainment, saved healthcare costs, or improved labour productivity. Nor is comparative cost-effectiveness analysis restricted to measuring health outcomes: similar differences in cost-effectiveness can be found in $ per year of additional schooling (see Figure X), net present value analyses of social, economic and environmental benefits per dollar spent (see Figure Y), and tonnes of CO2 equivalent averted per dollar spent (Figure Z).
We believe that the largest variance in a charity’s effectiveness comes from the type of intervention it carries out. However, the story does not end there. A particular charity may carry out a slightly different version of an intervention, or operate only in particular countries with different characteristics from those where the intervention was evaluated in the past. When we recommend particular charities, we therefore consider a number of additional factors which will influence their marginal impact. These are often charity-specific, but some common considerations are:
- Does the charity have a track record of success in achieving its goals? Are the staff of the charity experienced experts in their field?
- How will additional funds be used? Does the charity have the organisational capacity to absorb more funding in the short term? What is the long-term funding gap in the area in which the charity is active?
- How does the cost-effectiveness of the charity compare to the general cost-effectiveness of the intervention?
- How robust is the evidence supporting the cost-effectiveness of the charity?
- Does the charity have robust monitoring and evaluation procedures? Does it share this information more widely?
- Is there (independently verified) monitoring and evaluation of the charity’s projects?
- Many charities carry out a range of interventions. Are additional unrestricted funds likely to be used on the most cost-effective programmes? As a result of this consideration, we do not tend to recommend large organisations which undertake many interventions. While there are some benefits associated with large charitable organisations, such as economies of scale and synergies from reusing existing infrastructure and distribution networks, it is typically hard to know what your donation will be spent on, as even restricted donations can be funged across the organisation.
Finally, it is important to note that all of these measures can only ever be proxies for the true impact of a charity. We do not rely on them blindly, and we supplement our analysis with expert consensus when available.
When evaluating an intervention or charity, we assess the robustness of the evidence. This is a measure of our level of certainty over whether an intervention actually works, i.e. whether we can attribute causality between an intervention and the effect it has.
Our charity recommendations are based on a variety of empirical methods. We do not exclusively rely on experiments or Randomised Controlled Trials (RCTs), but supplement our work with qualitative analysis, impact evaluations, cost-benefit analyses, scientific analyses, and expert opinion. When evaluating the scientific literature, we keep a hierarchy of evidence in mind (although this will depend on the characteristics of the particular studies and the question we are trying to answer):
- Systematic reviews systematically search the academic literature and assess the evidence on a given topic. Systematic reviews can be of varying quality; the best have clear criteria for study inclusion and assess the quality of evidence in each study.
- Meta-analyses combine data from many different studies and apply statistical techniques to come up with a ‘pooled’ estimate of common effect. The advantage of a meta-analysis is high statistical power and generalisability across different settings. The best meta-analyses have an explicit procedure to deal with the problem of publication bias.
- RCTs with definitive results (confidence intervals that do not overlap the threshold for a significant effect). RCTs infer causality by randomly separating participants into a treatment group and a control group, which avoids the problem of selection bias. RCTs are generally considered the ‘gold standard’ of evidence but are not always available or ethical.
- Natural experiments, where people are exposed to experimental and control conditions in a way that resembles random assignment. Natural experiments can often be very informative, because they suggest causality and can sometimes have larger sample sizes and follow participants over longer time scales than RCTs.
- RCTs with non-definitive results (a point estimate that suggests a significant effect but with confidence intervals overlapping the threshold for this effect)
- Computational modelling
- Cohort studies
- Case-control studies
- Cross sectional surveys
- Case studies
Ideally, the evidence would support a particular charity’s intervention directly, but often we have to rely on evidence for the effectiveness of an intervention in general. It is therefore important to assess the external validity of the study in question (how well it generalises across different settings). Some programmes might work as a pilot project in one country at a particular point in time, but might not work in another country, or at a later point in time. Health interventions generally raise far fewer external-validity concerns than specific anti-poverty programmes (e.g. a medicine usually works in more or less the same way in different countries, whereas an educational programme might not generalise across them).
We also take into account common-sense considerations. In particular, different interventions face different burdens of proof. For instance, it is very plausible, even before seeing the evidence, that an unconditional cash transfer will reduce a recipient’s poverty at least in the short term (how long the effect lasts is another question). In contrast, we would require more proof, in the form of randomised controlled trials and meta-analyses, that malaria net distributions reduce malaria transmission.
When assessing a charity, what matters is the overall impact that further donations will make. Once you know what a program is meant to achieve, there are also risks which need to be taken into account: Corruption, misappropriation of resources and simple administrative error, to name a few. We need a way to reflect the uncertainty created by these risks in our assessment.
We can use an expected value calculation to do this. Consider the example of charities A and B, both of which are running new campaigns:
- If charity A’s campaign is successful, it will produce 20 quality-adjusted life years (QALYs) of good for every $1,000 donated. Although the campaign is new, it is based on past successes and is well researched. We assess its chance of success at 90%.
- If charity B’s campaign is successful it will produce 30 QALYs of good for every $1,000 donated. The campaign is ambitious, but there are a number of risks which could cause it to fail. We assess the chance of success at 50%.
To compare these campaigns, we multiply the potential good done by the probability of it happening to get our final figures:
- Charity A: [20 × 90% =] 18 QALYs per $1,000 donated.
- Charity B: [30 × 50% =] 15 QALYs per $1,000 donated.
As a result, we can see that at this point charity A is the better prospect, even though its campaign is less ambitious. Of course, if charity B’s campaign succeeds, the chance of its continued success will rise, and the calculation will change.
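The expected value calculation above can be sketched in code, using the hypothetical figures from the example:

```python
# Expected value = QALYs produced if successful × probability of success.
# Figures are the hypothetical ones from the charity A / charity B example.
campaigns = {
    "Charity A": {"qalys_if_successful": 20, "p_success": 0.9},
    "Charity B": {"qalys_if_successful": 30, "p_success": 0.5},
}

for name, c in campaigns.items():
    expected_qalys = c["qalys_if_successful"] * c["p_success"]
    print(f"{name}: {expected_qalys:.0f} expected QALYs per $1,000 donated")

# Charity A: 18 expected QALYs per $1,000 donated
# Charity B: 15 expected QALYs per $1,000 donated
```

The same structure extends naturally to more charities or to more fine-grained outcomes (e.g. several success scenarios each with its own probability, summed to a single expected value).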
| Term | Definition |
| --- | --- |
| YLD | Years Lived with Disability: the amount a disease affects someone is given a disability weight, which is multiplied by the number of years lived with that disease to calculate YLD. |
| YLL | Years of Life Lost: the number of years of potential life lost (relative to an idealised life expectancy) from a particular disease or risk factor. This is equivalent to a disability weight of 1. |
| DALY | Disability-adjusted life year: a measure of disease burden which quantifies both morbidity and mortality, calculated by summing YLL and YLD. Cost-effectiveness is measured in $ per DALY averted. |
| QALY | Quality-adjusted life year: similar to a DALY but measures the inverse (a QALY is good and we measure QALYs gained, while a DALY is bad and we measure DALYs averted). Cost-effectiveness is measured in $ per QALY gained. |
| RCT | Randomised Controlled Trial: often considered the ‘gold standard’ of evidence. People are randomly allocated between a treatment group and a control group (who do not receive the intervention). Causality is established by comparing outcomes between the two groups. |
Giving What We Can is a project of the Centre for Effective Altruism. Its focus areas are finding charitable causes with the highest chance of impact, and communicating its research results to a wide community of members and other donors. Giving What We Can’s research team produces high-quality research focusing on the most meaningful indicators of impact and cost-effectiveness. Their robust methodology draws on the latest thinking in effective altruism, development economics, and cause prioritisation.
Mundel, Trevor. "Honing the priorities and making the investment case for global health." PLoS Biol 14.3 (2016): e1002376. ↩
Horton, Susan, and Carol Levin. "Cost-Effectiveness of Interventions for Reproductive, Maternal, Neonatal, and Child Health." Disease Control Priorities (2016). ↩
"Errors in DCP2 cost-effectiveness estimate for … - The GiveWell Blog." 2011. Accessed 17 May 2016. <http://blog.givewell.org/2011/09/29/errors-in-dcp2-cost-effectiveness-estimate-for-deworming/> ↩
Bell, Chaim M et al. "Bias in published cost effectiveness studies: systematic review." BMJ 332.7543 (2006): 699-703. ↩
Fiedler, John L, and Chloe Puett. "Micronutrient program costs: Sources of variations and noncomparabilities." Food and nutrition bulletin 36.1 (2015): 43-56. ↩
Dhaliwal, Iqbal, Esther Duflo, Rachel Glennerster, and Caitlin Tulloch. "Comparative Cost-Effectiveness Analysis to Inform Policy in Developing Countries." In Education Policy in Developing Countries, edited by Paul Glewwe, 285-388. Chicago: University of Chicago Press, 2014. From: https://www.povertyactionlab.org/sites/default/files/publications/CEA%20in%20Education%202013.01.29_0.pdf ↩