Longtermism Fund: December 2022 Grants Report

20 Dec 2022
Please note — since this report, the fund's name has been changed to the Emerging Challenges Fund and Giving What We Can no longer manages the fund's communications.


The Longtermism Fund is pleased to announce that we will be providing grants to the following organisations in our first-ever grantmaking round:

  • Center for Human-Compatible Artificial Intelligence ($70,000 USD)
  • SecureBio ($60,000 USD)
  • Rethink Priorities' General Longtermism Team ($30,000 USD)
  • Council on Strategic Risks' nuclear weapons policy work ($15,000 USD)

These grants will be paid out in January 2023; the amounts were chosen based on the donations the Fund has received as of today.

In this grants report, we will provide more details about the grantmaking process and the grantees the fund is supporting. This report was written by Giving What We Can, which is responsible for the fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.

Read more about the Longtermism Fund here and about funds more generally here.

The grantmaking process

Longview actively investigates high-impact funding opportunities for donors looking to improve the long-term future. This means the grants are generally decided in two steps:

  1. Longview’s general work (which is not specific to the Longtermism Fund) evaluating the most cost-effective funding opportunities. This involves thousands of hours of work each year from their team. Read more about why we trust Longview as a grantmaker.
  2. Choosing among those opportunities based on the scope of the Fund.

In addition to this, the Fund decided to support a diverse range of longtermist causes this grantmaking round. Our current plan is to provide grant reports approximately every six months.

The scope of the Fund

The scope of the Longtermism Fund is to support organisations that are:

  • Reducing existential and catastrophic risks.
  • Promoting, improving, and implementing key longtermist ideas.

In addition, the fund aims to support organisations that have a compelling and transparent case for their cost-effectiveness, one that most donors interested in longtermism can understand, and/or that would benefit from being funded by a large number of donors.

Why the fund is supporting organisations working on a diverse range of longtermist causes

There are several major risks to the long-term future that need to be addressed, and they differ in their size, potential severity, and how many promising solutions are available and in need of additional funding. However, there is a lot of uncertainty about which of these risks are most cost-effectively addressed by the next philanthropic dollar, even among expert grantmakers.

Given this, the Fund aimed to:

  1. Provide funding to organisations working across a variety of high-impact longtermist causes, including:
    1. Improving biosecurity and pandemic preparedness
    2. Promoting beneficial AI
    3. Reducing the risks from nuclear war.
  2. Allocate funding to highly effective organisations working on these causes in rough proportion to how the Fund might deploy resources across areas in the future.

A benefit of this approach is that the grants highlight a diverse range of approaches to improving the long-term future.

We expect the Fund to take a similar approach for the foreseeable future, but we could imagine this changing if there is a persistent and notable difference in cost-effectiveness between the funding opportunities in different causes. At that point, so long as those opportunities are within the Fund’s scope, funding the most cost-effective opportunities will likely outweigh supporting a more diverse range of causes.


This section provides further information about each grantee, along with a comment from Longview on why they chose to fund the organisation and how they expect the funding to be used. Each entry links to either a charity page, where you can learn more about the grantee and support it through a direct donation, or to a public writeup describing the grantee’s work in more depth.

The Center for Human-Compatible Artificial Intelligence (CHAI) — $70,000

CHAI is a research institute focused on developing AI that is aligned with human values and will ultimately improve the long-term future for all beings. Their work includes conducting research on alignment techniques such as value learning, as well as engaging with policymakers and industry leaders to promote the development of safe and beneficial AI.

Longview: “We recommended a grant of $70,000 to CHAI on the basis of CHAI’s strong track record of training and enabling current and future alignment researchers, and building the academic AI alignment field. These funds will go toward graduate students, affiliated researchers and infrastructure to support their research.”

Learn more about The Center for Human-Compatible Artificial Intelligence.

SecureBio — $60,000

SecureBio is led by Prof. Kevin Esvelt of the MIT Sculpting Evolution Group, one of the leading researchers in the field of synthetic biology. In addition to contributing to developments in CRISPR and inventing CRISPR-based gene drives, he has published in Nature and Science, and his work has been covered in The New York Times, The New Yorker, The Atlantic, PBS, and NPR. SecureBio works on some of the most promising biosecurity projects for tackling the scenarios most likely to cause permanent damage to the world, such as preventing bad actors from abusing DNA synthesis, building infrastructure to detect the next pandemic very early, and developing advanced PPE and air sanitisation.

Longview: “We recommended a grant of $60,000 to SecureBio on the basis of their intense focus on the most extreme biological risks and their track record of developing promising approaches to tackling such risks. These funds will go toward research scientists, an air sanitisation trial and setting up a policy unit.”

Learn more about SecureBio.

Rethink Priorities' General Longtermism Team — $30,000

The General Longtermism Team at Rethink Priorities works on improving strategic clarity about how to improve the long-term future, such as whether and how much to invest in new longtermist causes. They have written about their past work and future plans in a recent EA Forum post.

Longview: “We recommended a grant of $30,000 to Rethink Priorities’ research and development work in a range of established and experimental existential risk areas on the basis of Rethink Priorities’ track record of generating decision-relevant research across a range of causes, and the promise of their research directions in existential risk areas. These funds will go toward researchers (including fellows) and research support.”

Learn more about Rethink Priorities’ General Longtermism Team.

The Council on Strategic Risks’ nuclear weapons policy work — $15,000

The Council on Strategic Risks is an organisation that focuses on reducing the risk of catastrophic events, including nuclear war. Their work on nuclear issues includes conducting research on nuclear proliferation and deterrence, engaging with policymakers to promote nuclear disarmament, and raising awareness about the risks of nuclear conflict.

Longview: “We recommended a grant of $15,000 to the Council on Strategic Risks’ nuclear programmes on the basis of their focus on the most extreme nuclear risks, ability to propose concrete policy actions which would reduce the risk of escalation into nuclear war, and strong networks within U.S. national and international security communities. These funds will go toward policy development and advocacy, as well as fellowships.”

Learn more about The Council on Strategic Risks.


We are grateful for the support of the 295 donors who have contributed over $175,000 USD in the fund’s first few months. We are excited to support the work of these organisations and believe that their activities are likely to have a significant positive impact on the long-term future. Please feel free to ask any questions in the comments of the version of this grants report on the EA Forum.