The Center on Long-Term Risk Fund supports promising projects and individuals working to build a global community of researchers and professionals dedicated to reducing involuntary suffering caused by emerging technologies.
What problem is the Center on Long-Term Risk Fund working on?
The Center on Long-Term Risk (CLR) works to address worst-case risks from the development and deployment of advanced artificial intelligence systems, with a current focus on conflict scenarios and the technical and philosophical aspects of cooperation.
What projects does the Center on Long-Term Risk Fund support?
The CLR Fund primarily supports individuals who want to make research contributions to CLR’s current priority areas. However, it will also support other high-quality projects if its fund managers believe they will contribute to reducing risks of suffering (now or in the future).
Recent grant recipients include:
Asher Soryl — research on the ethics of panspermia.
Bogdan-Ionut Cirstea — year-long research project on short AI timelines.
Nandy Schoots — three-month research project on simplicity bias in neural nets.
What information does Giving What We Can have about the cost-effectiveness of the Center on Long-Term Risk Fund?
We don't currently have further information about the cost-effectiveness of the Center on Long-Term Risk Fund beyond it doing work in a high-impact cause area and taking a reasonably promising approach.
Please note that GWWC does not evaluate individual charities.
Our recommendations are based on the research of third-party, impact-focused charity evaluators that our research team has found to be particularly well-suited to help donors do the most good per dollar, according to their recent evaluator investigations. Our other supported programs are those that align with our charitable purpose — they are working on a high-impact problem and take a reasonably promising approach (based on publicly available information).