We are on a mission to create a world in which giving effectively and significantly is a cultural norm. Our research recommendations and donation platform help people find and donate to effective charities, and our community — in particular, our pledgers — help foster a culture that inspires others to give.
In this impact evaluation, we examine our cost-effectiveness from 2020 to 2022 in terms of how much money was directed to highly effective charities because of our work.
We have several reasons for doing this.
This evaluation reflects two months of work by the GWWC research team, including conducting multiple surveys and analysing the data in our existing database. There are several limitations to our approach — some of which we discuss below. We did not aim for a comprehensive or “academically” correct answer to the question of “What is Giving What We Can’s impact?” Rather, in our analyses we are aiming for usefulness, justifiability, and transparency: we aim to practise what we preach and for this evaluation to meet the same standards of cost-effectiveness as we have for our other activities.
Below, we share our key results, some guidance and caveats on how to interpret them, and our own takeaways from this evaluation. GWWC has historically derived a lot of value from our community’s feedback and input, so we invite readers to share any comments or takeaways they may have on the basis of reviewing this evaluation and its results, either by directly commenting or by reaching out to firstname.lastname@example.org.
Our primary goal was to identify our overall cost-effectiveness as a giving multiplier — the ratio of the counterfactual money we direct to highly effective charities to our own operational costs.
We estimate our giving multiplier for 2020–2022 is 30x, and that we counterfactually generated $62 million of value for highly effective charities.
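The multiplier arithmetic can be sketched as follows. The cost figure below is not stated in the evaluation; it is simply what the two reported numbers jointly imply:

```python
# Giving multiplier = counterfactual value generated / operational costs.
value_generated = 62_000_000  # USD directed to highly effective charities, 2020-2022
multiplier = 30               # reported best-guess giving multiplier

# The implied cost base follows from the two reported figures (hypothetical,
# back-calculated rather than taken from the report):
implied_costs = value_generated / multiplier
print(f"Implied 2020-2022 costs: ${implied_costs:,.0f}")  # ≈ $2.07 million
```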
We were also particularly interested in the average lifetime value that GWWC contributes per pledge, as this can inform our future priorities.
We estimate we counterfactually generate $22,000 of value for highly effective charities per GWWC Pledge, and $2,000 per Trial Pledge.
We used these estimates to help inform our answer to the following question: In 2020–2022, did we generate more value through our pledges or through our non-pledge work?
We estimate GWWC caused $19 million in donations to highly effective charities from non-pledge donors in 2020–2022.
We arrived at these key results through dozens of constituent estimates, many of which are independently interesting and inform our takeaways below. We also provide an alternative conservative estimate alongside each of our best-guess estimates.
This section provides several high-level caveats to help readers better understand what the results of our impact evaluation do and don’t communicate about our impact.
We generally looked at average rather than marginal cost-effectiveness
Most of our models estimate our average cost-effectiveness: in other words, we divide all of our benefits by all of our costs. We expect this will not be directly indicative of our marginal cost-effectiveness — the benefits generated by each extra dollar we spend — which will be considerably lower due to diminishing returns.
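The gap between the two measures can be illustrated with a toy diminishing-returns curve (the benefit function below is hypothetical, not GWWC data):

```python
# Toy illustration: with diminishing returns, average cost-effectiveness
# (total benefit / total spend) exceeds marginal cost-effectiveness
# (benefit of one extra dollar).
def total_benefit(spend):
    # Concave benefit function: each extra dollar yields less than the last.
    return 1000 * spend ** 0.5

spend = 1_000_000
average = total_benefit(spend) / spend                      # benefit per dollar overall
marginal = total_benefit(spend + 1) - total_benefit(spend)  # benefit of one extra dollar
print(average, marginal)  # marginal is roughly half the average here
```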
We try to account for the counterfactual
This evaluation reports on the value generated by GWWC specifically. To do this, we compare what has actually happened with our best guess of what would have happened had GWWC never existed (the "counterfactual" scenario we are considering).
We did not model our indirect impact
For the purpose of this impact evaluation, we focused on Giving What We Can as a giving multiplier. Our models assumed our only value was in increasing the amount of donations going to highly effective charities or funds. While this is core to our theory of impact, it ignores our indirect impact (for example, improving and growing the effective altruism community), which is another important part of that theory.
Our analysis is retrospective
Our cost-effectiveness models are retrospective, whereas our team, strategy, and the world as a whole shift over time. For example, in our plans for 2023 we focus on building infrastructure for long-term growth and supporting the broader effective giving ecosystem. We think such work is less likely to pay off in terms of short-term direct impact, so we expect our giving multiplier to be somewhat lower in 2023 than it was in 2020–2022.
A large part of our analysis is based on self-reported data
To arrive at our estimates, we rely heavily on self-reported data, and the usual caveats for using self-reported data apply. We acknowledge and try to account for the associated risks of bias throughout the report — but it is worth keeping this in mind as a general limitation as well.
The way we account for uncertainty has strong limitations
We arrived at our best-guess and conservative multiplier estimates by using all of our individual best-guess and conservative input estimates in our models, respectively. Among other things, this means our overall conservative estimates likely understate our impact: they are only accurate if many separate conservative inputs all hold simultaneously, which is highly unlikely.
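A small simulation makes this concrete. The model below is entirely hypothetical (three uncertain inputs, each uniform on [0.5, 1.5], multiplied together); it shows that plugging each input's own 10th-percentile value into the model yields a figure well below the true 10th percentile of the combined estimate:

```python
import random

random.seed(0)

# Toy model (hypothetical numbers): an impact estimate that is the product of
# three independent uncertain inputs, each uniform on [0.5, 1.5].
def draw():
    return [random.uniform(0.5, 1.5) for _ in range(3)]

samples = sorted(a * b * c for a, b, c in (draw() for _ in range(100_000)))

# "Plug-in conservative" approach: use each input's own conservative
# (10th-percentile) value, then multiply them together.
p10_input = 0.6  # 10th percentile of Uniform(0.5, 1.5)
plug_in_conservative = p10_input ** 3

# True 10th percentile of the combined estimate, from the simulation:
true_p10 = samples[int(0.1 * len(samples))]

print(plug_in_conservative, true_p10)
# The plug-in figure sits well below the true 10th percentile, i.e. it
# understates impact more than a genuine 10th-percentile bound would.
```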
We treated large donors’ donations differently
For various reasons, we decided to treat large donors differently in our analysis, including by fully excluding our top 10 largest GWWC Pledge donors from our model estimating the value of a new GWWC Pledge. We think this causes our impact estimates to err slightly conservatively.
We made many simplifying assumptions
Our models are sensitive to an array of simplifying assumptions that readers could reasonably disagree with. For instance, for pragmatic reasons we categorised recipient charities into only two groups: charities we are relatively confident are "highly effective," and charities we aren't.
We documented our approach, data, and decisions
In line with our aims of transparency and justifiability, we did our best to record all relevant methodology, data, and decisions, and to share what we could in our full report, working sheet, and survey documentation. We invite readers to reach out with any requests for further information, which we will aim to fulfil insofar as we can, taking into account practicality and data privacy considerations.
Below is a selection of our takeaways from this impact evaluation, including implications that could potentially result in concrete changes to our strategy. Please note that the implications — in most cases — only represent updates on our views in a certain strategic direction, and may not represent our all-things-considered view on the subject in question. As mentioned above, we invite readers who have comments or suggestions for further useful takeaways to reach out.
Our giving multiplier robustly exceeds 9x
New GWWC Pledges likely account for most of our impact
We found an increase in recorded GWWC Pledge donations with time
A small but significant percentage (~9%) of our Trial Pledgers have gone on to take the GWWC Pledge, and this represents the bulk of the value we add through the Trial Pledge
The vast majority of our donors give to charities that we expect are highly effective
Our donations follow a heavy-tailed distribution
Nearly 60% of our donors’ recorded donations go to the cause area of improving human wellbeing
We plan to report on how we have used these takeaways in our next impact evaluation, both to hold ourselves accountable to using them and to test how useful they turned out to be.
If you are interested in supporting GWWC’s work, we are currently fundraising! We have ambitious plans, and we’re looking to diversify our funding and to extend our runway (which is currently only about one year). For all of this, we are looking to raise ~£2.2 million by June 2023, so we would be very grateful for your support. You can read our draft strategy for 2023, make a direct donation, or reach out to our executive director for more information.
We’d like to thank the many people who provided valuable feedback — including Anne Schulze, Federico Speziali, Basti Schwiecker, and Jona Glade — and in particular Joey Savoie, Callum Calvert, and Josh Kim for their extensive comments on an earlier draft of this evaluation. Thank you also to Vasco Amaral Grilo for providing feedback on our methodology early on in our process, and for conducting his own analysis of GWWC, which we will link to here once it’s published.
We’d also like to give a special thank you to our colleagues Fabio Kuhn and Luke Freeman for their extensive support on navigating our database, to Luke also for support on our survey design, to Katy Moore for high-quality review and copy editing, to Nathan Sherburn for helping us compile and analyse our survey results, and to Bradley Tjandra for spending multiple working days (!) supporting the data analysis for this evaluation. Both Nathan and Bradley conducted this work as part of The Good Ancestors Project.