Words and Deeds

Aug 19, 2015
So it turns out that philosophers who study ethics aren't actually all that ethical – this according to a recent article by Eric Schwitzgebel that I'd encourage you to read. Schwitzgebel writes about his research into the ethical behaviour of those who study how we should live for a living. He finds, broadly, that they don't act as morally as we might expect them to.

You might disagree with his metrics for assessing how ethical a person is (does it really matter how often you call your mother?), but I think he's hit on an important point, and one we can all recognise in ourselves.

Very often, the beliefs we claim to hold about how one ought to live one's life don't square all that well with the lives we're actually living. That's certainly true of me. In principle I agree wholeheartedly with the argument that I should do as much as I can to improve the lives of others – and continue doing so up until the point at which acting in that way begins to harm me more than it helps someone else.

But although I agree with that, I definitely don't act accordingly.

Think of what that would entail in practice. I could give away pretty much all of my worldly possessions (along with one of my kidneys), condemning myself to a life of poverty. Even then there might be more I could do. Because the levels of misery elsewhere in the world are so great, almost any sacrifice I make is going to benefit someone else more than it will inconvenience me.

That's an extreme example, but we can all think of less demanding principles that we say we believe but don’t put into practice (at least not consistently). Most of us probably believe that it's ethically important to consider the effects of our actions on others whenever possible – but often we don't.

One possible reason we don't is that we're fallible. We might firmly believe our principles, but other things get in the way – like our own selfishness.

Alternatively, it might be that instead of being fallible we're just lying to ourselves about what our principles really are – maybe, deep down, I don't really believe that I have an obligation to help others.

In either case, we're dealing with motives for action that are in some sense implicit or subconscious. We say one thing, but we do another. So, crucially, what we say isn't enough to explain what we do.

Why does this matter for effective altruism? The basic logic of effective altruism is a watered down version of the argument that I gave above – if you can, at comparatively little cost to yourself, save someone else's life (or significantly improve it), then you should. It's a reasoned argument about your ethical obligations.

The problem with that is that many people will agree wholeheartedly but won't change how they act on a day to day basis. I'm somewhat ashamed to be an example of this, too, but I don't think I'm the only one.

The point is that the force of the argument is difficult to contest, but that's not enough to persuade people to behave differently.

Now I am slightly caricaturing the effective altruism movement here. It is not just the propagation of a reasoned argument. Giving What We Can, for example, organises a variety of groups and events, in part to enhance the felt attractiveness of effective giving – such as showing those still undecided what acting on the 10% pledge feels like in practice.

However, the making of arguments is a large part of what the effective altruist community does, and I think we should consider how we might go beyond that. Because I am unconvinced that simply reaching more people with the same rational argument is going to persuade large numbers to become effective altruists. By large numbers I mean millions rather than thousands.

What we can do over and above explicit argument is to appeal to the implicit or subconscious decision making that people employ. One of the most fundamental ways of doing that is to make effective altruism the default or easier choice – the norm rather than the exception.[1]

A world in which everyone carefully considered each of their choices so as to maximise their beneficial impact would be a wonderful one. But we need to acknowledge that asking for that kind of deliberation actually makes things harder. Most people have enough on their plate that making things more difficult for them won't prompt them to change how they act.

So how do we make effective altruist choices easier? This is what organisations like GiveWell set out to do – in their case by giving people the necessary information to decide between different charities. But their research, while excellent, is still too voluminous and complex for most people to bother with.

We could do a lot more to make those choices more straightforward.

Here's one small example of how we might make it easier and more rewarding for people to donate to effective charities. There are a couple of big online fundraising platforms that large numbers of people use, one of which is JustGiving. On their website they claim to have channelled $3.3 billion to charity since their foundation in 2001. That's a lot.

But there's nothing (or nothing I could find) on their site about charitable effectiveness. What if GiveWell's research and ratings were incorporated, in a user-friendly manner, into the site? Or how about promoting particularly effective charities as targets for donation?

By making it easier for large numbers of people to make effective altruist choices, we could realise enormous gains.

I would argue that embedding effective altruist ideas into the landscape of day-to-day activity is, in the long term, more important than pushing the rational argument for effective altruism. The reason is simple: most people do not have large amounts of time or energy to devote to complex reasoned debate about these issues, even if they intuitively agree with the basic position.

Winning the argument is easy – what we need to focus on is changing the way people act.

  1. This and much of the subsequent discussion has obvious links to recent work in behavioural economics on the manipulation of choice architectures – popularly known as nudging. I won't discuss this any further here, but if you're interested, take a look at Richard Thaler and Cass Sunstein's 2008 book 'Nudge'.