This page presents the results from Giving What We Can’s 2025 iteration of our ongoing project to evaluate impact-focused evaluators and grantmakers (which we collectively refer to as ‘evaluators’). We use this project to decide which evaluators we rely on for our giving recommendations and to advise our cause area funds.
In the 2025 round, we completed two evaluations that informed our donation recommendations in global health and wellbeing for the 2025 giving season and beyond. Specifically, we evaluated:
As with our 2023 and 2024 rounds, there are substantial limitations to these evaluations, but we nevertheless think they are a significant improvement on a landscape in which there were previously no independent evaluations of evaluators' work.
On this page, we:
In future iterations of this project, we aim to conduct new evaluations of our existing vetted evaluators, re-evaluate evaluators we do not yet rely on, and expand beyond the eight evaluator programmes we've covered so far. We also hope to further improve our methodology.
For more context on why and how we evaluate evaluators, see our main evaluating evaluators page.
Based on our evaluation, we have decided to continue including GiveWell's Top Charities, Top Charities Fund and All Grants Fund in GWWC's list of recommended programmes and to continue allocating a portion of GWWC's Global Health and Wellbeing Fund to GiveWell's All Grants Fund. As GiveWell met our bar in our 2023 evaluation, our task was to determine whether their evaluation quality had been maintained and whether there were significant issues we had previously missed. Our decision is based on two main considerations:
We also note GiveWell's progress across all three areas for improvement we identified in our 2023 evaluation — transparency and legibility, forecast reviews, and uncertainty handling — demonstrating their commitment to continuous improvement. Whilst we think there remains room for further improvement, particularly in the legibility of grant evaluations and justification of subjective inputs, we continue to think that GiveWell's reputation for providing high-quality recommendations and grants is justified, and we expect them to maintain their position as a leading source of impact-focused recommendations in global health and wellbeing.
For more information, please see our 2025 evaluation report for GiveWell.
Following our 2025 investigation of the Happier Lives Institute (HLI), we've decided not (yet) to include HLI's recommended charities in our list of recommended charities and funds, and we do not plan to allocate a portion of GWWC's Global Health and Wellbeing Fund budget to them at this point. However, this was an unusually difficult decision, and reasonable people could disagree with our conclusion.
HLI is filling an important gap — identifying opportunities for donors who strongly prioritise life improvements over life-saving benefits. We found much to appreciate:
Despite these strengths, we couldn't confidently conclude that HLI's process reliably identifies opportunities at least as cost-effective as GiveWell's for donors with a life-improving focus:
We emphasise: our conclusion is consistent with HLI's charities being highly cost-effective, and potentially even more cost-effective than GiveWell's Top Charities under HLI's worldview. Our concern is about process reliability, not individual charity quality. We'll continue considering HLI's recommendations for our 'other supported programmes', and we believe their recommendations are worth consideration by donors, particularly those with strong life-improving preferences.
We're optimistic about HLI's trajectory and look forward to re-evaluating them in a future round of evaluating evaluators.
For more information, please see our 2025 evaluation report for HLI.
Because we decided to not (yet) rely on the Happier Lives Institute, GiveWell, whom we re-evaluated in 2025, remains the only evaluator we rely on in the global health and wellbeing cause area. As such, we plan to continue:
As discussed above, a key goal for our evaluations project was to decide which evaluators to rely on for our recommendations and grantmaking. We were additionally interested in providing guidance to other effective giving organisations, providing feedback to evaluators, and improving incentives in the effective giving ecosystem.
For each evaluator, our evaluation aimed to transparently and justifiably come to tailored decisions on whether and how to use its research to inform our recommendations and grantmaking. Though each evaluation is different — because we tried to focus on the most decision-relevant questions per evaluator — the general process was fairly consistent in structure:
Note: we were highly time-constrained in completing this iteration of our project. On average, we spent about 20 researcher-days per organisation we evaluated (in other words, one researcher spending four full work weeks), of which only a limited part could go into direct evaluation work: a lot of time needed to go into planning and scoping, discussion of the project as a whole, communications with evaluators, and so on.