The Center on Long-Term Risk Fund supports promising projects and individuals that are working to build a global community of researchers and professionals dedicated to reducing involuntary suffering due to emerging technologies.
The Center on Long-Term Risk (CLR) works to address worst-case risks from the development and deployment of advanced artificial intelligence systems, with a current focus on conflict scenarios and the technical and philosophical aspects of cooperation.
The CLR Fund primarily supports individuals who want to make research contributions to CLR's current priority areas. However, it will also support other high-quality projects if its fund managers believe they will contribute to reducing risks of suffering, now or in the future.
For more information about how donations are allocated, see the list of past recipients and the grantmaking process on the CLR website.
Beyond noting that the fund works in a high-impact cause area and takes a reasonably promising approach, we don't currently have further information about the cost-effectiveness of the Center on Long-Term Risk Fund.