If you’d like to protect future generations (rather than strictly helping individuals living in the present), you should consider how to safeguard the long-term future. Most of the work in this area involves preventing global catastrophes and making our world more resilient.
We all care about the future. We want the people who come after us — our children, our children's children, and so on — to have good lives. But this might not happen. Nuclear catastrophes, runaway climate change, and engineered pandemics are just some of the threats to future generations. The scale of these threats is hard to imagine, as they affect not only everyone alive today, but everyone who could ever live.
Relative to this enormous scale, the problem is neglected. One of the world's leading experts on the risks facing humanity is Toby Ord, a philosopher at Oxford and a co-founder of Giving What We Can. In his book The Precipice, he argues that one of the biggest risks to humanity comes from engineered pandemics — think COVID, but designed to be far worse. Yet, despite the severity of this risk, the international body responsible for the continued prohibition of bioweapons has an annual budget of just $1.4 million USD — less than the average McDonald's restaurant.
Addressing risks to our future is the least tractable of our high-priority cause areas. Though there are ways of making progress, it is difficult to know how effective our actions will be. Nonetheless, given the scale of the problem and how little support is given to some of the most significant risks, we still think safeguarding the long-term future is a high-priority cause area.
We recommend some of the best charities and organisations working on this problem.
There are several ways you can help safeguard the long-term future. We've provided research reports into four causes that we think are especially promising.
We are still learning about how dangerous climate change might be for our future. What we do know is that a significant fraction of the CO2 we release stays in the atmosphere for millennia, and that this comes at the cost of making the planet a less hospitable place for its inhabitants. We recommend supporting clean energy innovation and advocating for policies that will improve our climate.
COVID-19 has made us all familiar with how devastating a pandemic can be, but despite how damaging COVID-19 has been, there is a possibility that a future pandemic could be far worse. Preparing for future pandemics, especially those that pose a risk to humanity's survival or long-term potential, is one of the best ways we have to safeguard the long-term future.
Artificial intelligence might be the most important technology we ever develop, so it's vital that it's developed safely and used equitably. What's particularly concerning is that if artificial intelligence is developed unsafely, it could pose an existential risk: permanently limiting humanity's potential, and perhaps even ending it. To address this concern, as well as other issues around artificial intelligence, we recommend supporting technical work that focuses on ensuring artificial intelligences reliably behave in ways we want and expect, and political work that ensures artificial intelligence is developed and used in the interests of everyone.
Nuclear weapons are a recent development in human history, and it's only since their development that humanity has possessed a technology that could lead to its own destruction. Improving nuclear security involves lowering the risk of nuclear war and mitigating the negative consequences of nuclear fallout. On our cause page, we outline several plausible ways to do this, but the inherently complex nature of the problem makes it difficult to know which is best to support.
Learn more about improving nuclear security.
Though we think safeguarding the long-term future is among the highest-priority cause areas we could work on, there are reasons you might choose not to focus on it.
Compared to our other high-priority cause areas, there is a weaker evidence base for interventions that could safeguard the long-term future. This is partly due to the nature of the problem and the solutions it requires: we would only receive feedback that a given intervention failed to prevent an existential catastrophe once, when it's too late to act on the lesson. However, concerns over the evidence base do not apply to all activities within this cause area. For example, there is strong evidence that climate change is going to cause serious damage to the planet and its future inhabitants, and there are tractable ways of making progress on the problem. By contrast, the nature of newer (and more neglected) risks, like powerful but out-of-control artificial intelligence, makes it especially difficult to know how likely any given intervention is to make progress on the problem.
Depending on your values and worldview, you may choose to prioritise areas where you feel confident that your actions are making a positive difference. This might still mean prioritising the better-understood risks, like climate change, or focusing on a different cause area, like global health and development. Alternatively, you might think that the best response to this lack of evidence is to support further research.
You may doubt that future generations are as important as we suggest. Perhaps you think the welfare of future people matters less than that of people alive today. Though we think that future generations matter and their welfare deserves equal consideration, this is ultimately a question about your values.
Alternatively, you may value future people, but believe that the future isn't likely to be particularly big or especially good. These questions come down to your worldview. You can read more about the potential size of the future in section three of The Case for Strong Longtermism.
We recommend giving to Founders Pledge's top charities safeguarding the future and addressing climate change. We also recommend funds that support a variety of projects designed to safeguard the long-term future. Funds allow donors to pool their resources so that expert grantmakers and charity evaluators can allocate them.
- Johns Hopkins Center for Health Security
- Center for Human-Compatible Artificial Intelligence
- The Nuclear Threat Initiative (Biosecurity Program)
- The Clean Air Task Force
- Effective Altruism Funds' Long-Term Future Fund
- Founders Pledge's Climate Change Fund
There are other ways to help. Addressing these issues often requires in-depth knowledge about the particular risks. You can check out the 80,000 Hours Job Board to learn more about some of the job opportunities in this space, and read their career guide to learn more about how you can build the skills needed to help safeguard the future.
To learn more about safeguarding the long-term future, we recommend the following resources:
- The Precipice (Toby Ord)
- What We Owe the Future (Will MacAskill)
- The Long-Term Future (Jess Whittlestone)
- Biosecurity and Pandemic Preparedness (Open Philanthropy)
- Potential Risks from Advanced Artificial Intelligence (Open Philanthropy)
- The Case for Reducing Existential Risks (80,000 Hours)
- A Full Syllabus on Longtermism (EA Forum)
This page was written by Michael Townsend.
Please help us improve our work — let us know what you thought of this page and suggest improvements using our content feedback form.
However, it's not clear that focusing on near-term issues alleviates our uncertainty about whether we're having a positive impact. This is because actions addressing near-term issues still have indirect (unintended) long-term consequences. Just as a butterfly flapping its wings may cause a tornado, supporting a particular intervention in global health and development may have long-term political, economic, and social ramifications. This might make you morally clueless — meaning whether you believe an action is good or bad depends almost entirely on your (highly uncertain) views about the action's indirect consequences. Read more about moral cluelessness. ↩︎