Top-rated charity

Center for Human-Compatible Artificial Intelligence (CHAI)

The Center for Human-Compatible Artificial Intelligence (CHAI) is a multi-institution research group based at UC Berkeley, with academic affiliates at a variety of other universities. CHAI aims to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.

What problem is CHAI working on?

Artificial intelligence research aims to design machines capable of intelligent behaviour that successfully achieves their objectives. The long-term outcome of AI research seems likely to include machines that are more capable than humans across a wide range of objectives and environments.

But if these new machines and systems are more capable than humans — and intrinsically unpredictable by humans — it stands to reason that some could result in negative, perhaps irreversible outcomes for people. In The Precipice, Toby Ord — one of the leading experts on the risks facing humanity (and a cofounder of Giving What We Can) — argues that powerful but misaligned AI is the biggest threat to humanity’s survival this century.

What does CHAI do?

CHAI aims to reorient the foundations of AI research toward the development of provably beneficial systems. Because the meaning of “beneficial” depends on the properties of humans, this inevitably includes elements from the social sciences in addition to AI.

Currently, it is not possible to specify a formula for human values in any form that, if adopted as the objective of a powerful AI system, would provably benefit humanity. In short, any initial formal specification of human values is bound to be wrong in important ways.

Therefore, much of CHAI's research efforts to date have focused on developing and communicating a new model of AI development, in which AI systems should be uncertain of their objectives, and should defer to humans in light of that uncertainty. This way of formulating objectives stands in contrast to the standard model for AI, in which the AI system's objective is assumed to be known completely and correctly.

CHAI also works on a variety of other problems in the development of provably beneficial AI systems, including:

  • The foundations of rational agency and causality
  • Value alignment and inverse reinforcement learning
  • Human-robot cooperation
  • Multi-agent perspectives and applications
  • Models of bounded or imperfect rationality
  • Adversarial training and testing for machine learning systems
  • Security problems and solutions
  • Transparency and interpretability methods

CHAI’s work so far has involved research, field-building, and thought leadership.

For more information about CHAI’s recent work, see its latest Progress Report.

Why is CHAI one of our top-rated charities?

CHAI meets our criteria to be top-rated because one of our trusted evaluators, Founders Pledge, has conducted an extensive evaluation highlighting its cost-effectiveness. (Our trusted evaluators are charitable giving experts who focus on impact — their research into the best charities means your donations can do even more good. Learn more about charity evaluators we trust and why.)

Other indicators of its cost-effectiveness are its impressive research output and the fact that it has received funding from an expert impact-focused grantmaker, Open Philanthropy.

At Giving What We Can, we focus on the effectiveness of an organisation’s work, which considers much more than just the administration costs of the organisation. Learn more about this common “overhead myth” and our approach to charity evaluation.

Why donate through the Giving What We Can donation platform?

Your donations through our portal are tax deductible in the UK, US, and the Netherlands. Giving What We Can does not take any fees from donors using our platform or from charities listed on our platform. We are independently funded to promote our mission of making giving effectively and significantly a cultural norm. Read more on our transparency page.
