Center for Human-Compatible Artificial Intelligence (CHAI)

AI Safety Research and Field Building

CHAI (a multi-institution research group based at UC Berkeley) aims to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.

What problem is CHAI working on?

Artificial intelligence research is aimed at designing machines that are capable of intelligent behaviour to successfully achieve objectives. The long-term outcome of AI research seems likely to include machines that are more capable than humans across a wide range of objectives and environments.

But if these new machines and systems are more capable than humans — and intrinsically unpredictable by humans — it stands to reason that some could result in negative, perhaps irreversible outcomes for people. In The Precipice, Toby Ord — one of the leading experts on the risks facing humanity (and a cofounder of Giving What We Can) — argues that powerful but misaligned AI is the biggest threat to humanity’s survival this century.

What does CHAI do?

CHAI aims to reorient the foundations of AI research toward the development of provably beneficial systems. Because the meaning of “beneficial” depends on the properties of humans, this inevitably includes elements from the social sciences in addition to AI.

Currently, it is not possible to specify a formula for human values in any form that we know would provably benefit humanity, if that formula were adopted as the objective of a powerful AI system. In short, any initial formal specification of human values is bound to be wrong in important ways.

Therefore, much of CHAI's research efforts to date have focused on developing and communicating a new model of AI development, in which AI systems should be uncertain of their objectives, and should defer to humans in light of that uncertainty. This way of formulating objectives stands in contrast to the standard model for AI, in which the AI system's objective is assumed to be known completely and correctly.

CHAI also works on a variety of other problems in the development of provably beneficial AI systems, including:

  • The foundations of rational agency and causality
  • Value alignment and inverse reinforcement learning
  • Human-robot cooperation
  • Multi-agent perspectives and applications
  • Models of bounded or imperfect rationality
  • Adversarial training and testing for machine learning systems
  • Security problems and solutions
  • Transparency and interpretability methods

CHAI’s work so far has involved research, field-building, and thought leadership.

For more information about CHAI’s recent work, see its latest Progress Report.

What information does Giving What We Can have about the cost-effectiveness of CHAI?

We previously included CHAI as one of our recommended charities because the impact-focused evaluator Founders Pledge conducted an evaluation highlighting its cost-effectiveness. Other indicators of its cost-effectiveness include its impressive research output and the fact that it has received funding from an expert impact-focused grantmaker, Open Philanthropy.

We’ve since updated our recommendations to reflect only organisations recommended by evaluators we’ve looked into as part of our 2023 evaluator investigations. While we expect to look into Founders Pledge soon as part of this more in-depth evaluator research, we haven’t yet. As such, we don't currently include CHAI as one of our recommended programs, but you can still donate to it via our donation platform.

Please note that GWWC does not evaluate individual charities. Our recommendations are based on the research of third-party, impact-focused charity evaluators our research team has found to be particularly well-suited to help donors do the most good per dollar, according to their recent evaluator investigations. Our other supported programs are those that align with our charitable purpose — they are working on a high-impact problem and take a reasonably promising approach (based on publicly available information).

At Giving What We Can, we focus on the effectiveness of an organisation's work: what the organisation is actually doing and whether its programs are making a big difference. Some others in the charity recommendation space focus instead on the ratio of admin costs to program spending, part of what we’ve termed the “overhead myth.” See why overhead isn’t the full story and learn more about our approach to charity evaluation.