The Center on Long-Term Risk (CLR) works to address worst-case risks from the development and deployment of advanced artificial intelligence systems, with a current focus on conflict scenarios and the technical and philosophical aspects of cooperation.
The risks from the development and deployment of advanced AI systems pose a complex challenge. Because our resources are limited, CLR believes we need to prioritise and ask ourselves what actions we should take now to have as much of a positive impact as possible.
CLR's current priorities are informed by a number of crucial considerations, and it pursues several approaches to address these risks.
We don't currently have further information about the cost-effectiveness of the Center on Long-Term Risk beyond the fact that it works in a high-impact cause area and takes a reasonably promising approach.