Podcast interview #4

Joshua Greene: Moral Tribes and bridging the gap between "us" and "them"

25 min read
24 Sept 2021

"Climb that evolutionary ladder but then kick it away to reach a point where we can offer everybody the same regard that our ancestors only offered the people within their tribe."

In this episode of the Giving What We Can podcast, we are joined again by renowned psychologist, Joshua Greene, for part two of our interview, where we explore his book Moral Tribes.

Transcript

00:49 – Overview of Moral Tribes

Joshua Greene: The main question that Moral Tribes is asking is "what kind of a morality do we need to live on a single planet with lots of different people from lots of different groups with competing values and competing interests?" Historically, when groups of humans have had conflicts of interest or conflicts of values, they've just fought it out. In recent centuries, we've built structures and found ways for more and more diverse peoples to get along with each other but, at the same time, we still struggle with this within nations and between nations.

And so, I came at this from first principles as a philosopher. I started thinking about philosophy when I was in my early teens when I started doing debate in high school. And I thought that utilitarianism makes a kind of sense. This is the philosophy that goes back to Jeremy Bentham and John Stuart Mill that says "Okay. Well, what really matters?" What matters is people's happiness or people's suffering, their experience and wellbeing. And when you start with anything that you care about and say "Okay but why does that matter and why does that matter?" and you keep following until you run out of answers, you end up with something about the quality of someone's lived experience. If you say like "Oh, you go to work. Well, why do you work?"

It's like "Well, I need to make money."

"Well, why do you need to make money?"

It's like "Well, I got to pay the rent."

"Well, why do you have to pay the rent?"

"Well, I don't want to like live outside or I will ..."

"Well, why not?"

"Well, it's cold."

"Well, what's wrong with being cold?"

"Well, it's painful."

"Well, what's wrong with pain?"

"Well, it's just bad," right?

At some point, if you follow that chain, you ultimately get to pain and suffering or a sort of broader inventory of positive or negative experiences, right? So, that makes sense. And then you say "Okay. Well, that's what ultimately matters." That's sort of the value, the currency behind other values.

Whose wellbeing or suffering or happiness matters? And the utilitarian answer was, at the time, a radical one, which is everyone's: everyone's suffering matters the same, everyone's happiness matters the same. And what we should be doing to make the world better is to reduce suffering and increase positive experience, right?

And that all made sense to me but at the same time, I learned in high school when I was a debater that there were these objections to this idea of promoting the greater good. If you have five people who all have missing organs and you could save them by kidnapping one person and carving out their organs and giving them to the other five people, would that be okay? And it seemed to me "No, that seems wrong" and it seems wrong to a lot of people.

And so, I thought "Well, on the one hand, this idea of promoting the greater good makes a lot of sense. On the other hand, it seems like there are some compelling arguments against it." And I wanted to get to the bottom of that and say "Okay. Well, which is right? Is it the greater good that should prevail or is there something right about these arguments?"

And that line of thinking got me into the psychology and ultimately the neuroscience of moral thinking. And I started thinking a lot about moral dilemmas that kind of encapsulate this tension.

A lot of the early work involved brain imaging, we were actually looking at people's brains while they were making these decisions and trying to sort of understand the competing mechanisms in the brain that make these dilemmas feel like such dilemmas.

So, after many years of thinking about this as a philosopher and then later as a scientist trying to understand how we make moral decisions, Moral Tribes was sort of the culmination of all of that.

So, I spend the first part of the book kind of laying out the basics of human morality as I understand it, which I think of as a suite of psychological tendencies that enable us to cooperate, that enable us to not just be selfish and reap the benefits of teamwork.

That sort of gets us to the morality within the tribe. And the modern problem is "Okay but what about when we have tribes that have competing interests in the same way that within a tribe, you have individuals with competing interests?"

Morality, basic morality is what enables a bunch of otherwise selfish individuals to form a tribe.

So, what do we need when we've got tribes at odds with each other, right? And the answer is we need something like a metamorality. And what I argue is that utilitarianism really is the right metamorality.

And I also argue that utilitarianism is a terrible word. It just completely gives people the wrong idea about what it means to try to make the world as happy as possible and alleviate as much suffering as possible.

I suggest my alternative which I call 'Deep Pragmatism' which hasn't yet really taken off but here I am. If anyone wants to call themselves the deep pragmatist, please go ahead and let me know that you're doing it ... you don't have to let me know.

Essentially, you can summarize it this way: when it comes to the morality of everyday life, it's fine to rely on our intuitive, emotional sense of right and wrong most of the time. But when it comes to solving global moral problems, problems between groups that have different moral intuitions, we can't rely on our moral intuitions. We need some kind of common currency.

The argument of Moral Tribes is that our intuitions are helpful but imperfect. We sometimes need to put them aside and slow down and make certain kinds of judgements and policies based on a more reasoned approach to promoting human welfare.

The book ends with a small set of practical suggestions about how to do this. And one of them is: you can't control everything, but one thing you do control is what you do with your extra cash. And I suggested that people use that to do as much good as possible with the help of GiveWell and other organizations like that. And that's part of what led me to say "Okay, it's time to start putting this stuff into action." And then that's what led to Giving Multiplier.

Luke Freeman: Excellent.

06:57 – Why are we moral beings?

Luke Freeman: I'd love for you to talk a little bit about the difference between the moral worlds of our early human ancestors compared with those of our more recent ancestors and those of us alive today, and the kinds of things we're going to be facing in the future, and why you think the book and the work you've done there informs some of that.

Joshua Greene: Morality is an interesting thing because morality didn't necessarily evolve for niceness. The best explanation that we have for why we are moral beings is competition; that is, morality evolved because it's really a device for teamwork, right?

Imagine a bunch of different human groups. And you have some groups where everybody's out for themselves as individuals and you have other groups where the individuals are willing to cooperate and work together to advance their collective interest and out-compete other groups. Well, which group do you think is going to be more successful? The group of divisive individualists or the group that really works as a team, right?

So, teamwork is an effective competitive weapon, whether it's literally competition in war or more indirect competition for resources or just the ability to stabilize and grow.

So, our basic ability to care about other people and be willing to offer someone else some food when I have some and you don't or jump in a river and pull you out when you're drowning and hope that you might do the same for me or for my child if given the opportunity, those capacities evolved because of this value of teamwork.

So, if morality is ultimately about competition, then does that mean that we can never form one ultimate group or can we do better than that? And my hope as both a scientist and as a philosopher is that we can adjust our thinking for the modern world. So, yes, morality evolved as a device for teamwork within groups for competition between groups but we can take that basic psychology and decide that we want to scale it up, that we don't have to follow the evolutionary logic that gave us morality in the first place.

An example of climbing the evolutionary ladder and kicking it away in a different but related domain is birth control. How did we humans end up with birth control? The directive of natural selection is: "make as many copies of yourself as you can and survive". It's reproduce, reproduce. And yet, here we invented this thing that enables us to not reproduce as much as we otherwise could.

So, evolution gave us these big clever brains that allow us to engage in general-purpose problem solving. It also gave us some desires to have and rear children, but it also gave us some desires to have sex that are not related to rearing children. And so, we very clever apes said "Hey, if we do this, that, and the other thing and make this pill, we can have sexual activities that we enjoy but without having more and more and more and more children."

Now, if evolution could speak, it would say "No, that's the worst thing in the world that you could possibly do." And our response to that is "Tough luck. This is what we want"; we don't have to adopt evolution's values as our values. We can say "You know what? Having zero or one or two children is fine even if we could support 10 children with the resources that we have."

And in the same way, even if morality evolved for in-group cohesion and between-group competition, there's no reason why we can't say "All right, but what really makes sense when we think about it is valuing everybody's wellbeing." And so, we should climb that evolutionary ladder but then kick it away to reach a point where we can offer everybody the same regard that our ancestors only offered the people within their tribe, passing the baton from biological evolutionary imperatives to cultural ones that reflect a set of values grounded in our nature but not bound by our nature.

11:06 – How big can our moral tribes be?

Luke Freeman: Do you think that we have natural limits to how big our moral tribes can be or do you see that there is a path to a maximally big tribe that's quite expansive?

Joshua Greene: I would say that there are obstacles and there are tensions but I don't want to say that there are necessarily any limits on that.

I mean, look at a modern city and imagine someone from 5,000 years ago seeing it: you live in a world where there is enough food to feed everybody, where resources are available for people from all over the world to get an education and do the things that they find fulfilling, and where people from tribes that were historically trying to kill each other now work comfortably together. It would be almost impossible for someone 5,000 years ago to imagine a prosperous and relatively integrated place like the successful cities and nations that we have here on earth.

So, we tend to focus on everything that's wrong with the world today. And there is a lot wrong with it, and I am not for a moment denying that there is terrible injustice and that we should be working hard to overcome it but, at the same time, I see a lot of progress in our history and I don't see any reason why we can't make the same kind of progress in the future that we have made in the past.

12:37 – Responses to Moral Tribes

Luke Freeman: What kind of response have you received with your work?

Joshua Greene: One of the great joys of being a scientist is you put something out there, you describe a theory, present some results in support of it, and then other people say "Oh, I get that idea. And I bet if that theory is right, then patients with this kind of neurological damage would respond this way instead of that way, and people with this other neurological disorder might give the opposite response." Labs around the world took this idea and really ran with it and provided a lot of the most compelling evidence for the ideas that we were initially testing.

Of course, science is science and once that idea becomes popular, people then are especially inclined to challenge it. And so, there've been people who criticized it and proposed alternative theories. Overall, I think the original ideas have survived pretty well but certainly with some interesting modifications and extensions.

On the philosophical side, I can't say that there has been a sort of utilitarian revolution in philosophy as a result. I mean, part of it, I think, is I was defending a conclusion that had already been defended by other well-known historical figures. So, it wasn't a novel conclusion but more of a different way of getting there.

I think one of the challenges in philosophy is there's this idea, which I actually think is correct, that you can't derive an ought from an is. Science is about "is", describing what's going on in the world and why, whereas morality is about "ought": how the world ought to be or how people ought to behave, not how they do. And so, there's the simple-minded version of that that says "And that means that all science is irrelevant, because is over here, ought over there." And so, there was a kind of, I think, knee-jerk response among a lot of philosophers to think "How could this possibly be relevant?" And instead of reading closely and then saying "Oh, I see how this could be relevant," they just imagined the worst version of the argument and assumed that that must be it.

When I talk to philosophers, often I have to explain to them "No, I'm not making the dumb argument that you've imagined that I've made. It's more complicated than that" but I think there have been others who have gotten it. And I think the idea that understanding ourselves is relevant to moral thinking shouldn't be such an outlandish idea.

It would be one thing if the way philosophers arrived at their moral conclusions was with formal proofs, deducing conclusions from first principles as if it were just a math problem. And it doesn't matter what your psychology is like if you're doing math; you don't have to get into the psychology of mathematics. You just look at the proof and ask "Does each step follow from the last until you get to the conclusion?"

And if moral philosophy were like that, then psychology would be irrelevant, but what happens in moral philosophy is we rely on our intuitions. We say "Well, that's clearly wrong." And anytime people say "clearly", you know that they're relying on their intuitions. Well, anytime you're relying on your intuition, you're essentially making a leap from a psychological is to a moral ought. You're saying "It is the case that I have this feeling. And therefore, you ought not do that," right? Anytime you're relying on intuition, you're already making a kind of is-ought leap.

And what science can do is it can say "Look, what's behind your intuition? Where is it coming from? And what's the best explanation that we have based on our understanding of the psychological and biological history of how we got to have these feelings of why you feel that way?" And then that gives us some insight into "Okay, but does it really make sense to trust those feelings or should we perhaps look for a more abstract and maybe less emotionally satisfying but systematic approach to answering those moral questions?" And that's essentially what I've been arguing.

Luke Freeman: Yeah. So, Giving What We Can is part of the effective altruism community, which is looking at how we can use our resources to do the most good. I'd be interested in your thoughts on utilitarianism and kind of its relationship with effective altruism and what the overlaps might be.

16:53 – Relationship between utilitarianism and effective altruism

Joshua Greene: One of the things that I really like about the effective altruism movement is that it doesn't start with utilitarianism. I think the idea of "If you're going to try to help, help as effectively as possible. And if you're in a really good position to help, shouldn't you be helping?" is a very broadly appealing idea, whereas utilitarianism is much more abstract and much more susceptible to tough counterarguments that require even more complicated counter-counterarguments to counter.

Why didn't I invent effective altruism, you know, when I was an undergrad and first thinking about these things? I think it's because I was thinking "Well, step one, convince people that utilitarianism is right. Step two, put it into action." And I think part of the genius of effective altruism is it just skips right over step one. It just says "Look, you may have this kind of more encompassing philosophical view that a lot of us share but you don't have to." You can just say "I'm incredibly fortunate to have these resources. I want other people to have the advantages that I've had and to not suffer unnecessarily and I want to use my resources to do as much good as I can." You can buy into that without buying into this complicated and controversial moral philosophy.

So, I applaud the effective altruism movement for skipping over the philosophy, at least not going there as the first step, but then I think it's also important or helpful to say "Look, if you want to understand where these ideas come from and how this might fit into an attempt to have a more coherent, encompassing worldview, now it's time to have a philosophical discussion." I think what EA did was get the order right for practical purposes, which is the opposite of the order that I was thinking about as a young 20-something philosophy student.

Luke Freeman: Yeah. Thank you. In your conversation with Sam Harris and Will MacAskill, you talked about taking an incremental approach and beating your personal best in the face of this Singerian thought experiment. Can you share more about this?

19:08 - Beating your personal best

Joshua Greene: Effective altruism is sort of wonderfully fulfilling in the sense that you realize that you have this ability as a relatively ordinary person with some disposable income to just dramatically improve people's lives. Will MacAskill talks about if you could run into a burning building and pull somebody out and save their life, that would be a highlight of your life if you're able to be that kind of hero. And when it comes to the consequences of effective giving, you can be that hero every year. I mean, it doesn't feel as heroic as putting your life at risk like that but you can at least say "I'm really happy that I'm able to do this."

Where it gets tough is: where do you stop? And a common argument that's made against this is "Well, if you have an obligation to give the first thousand dollars, why not the second, why not the third?" There's an unlimited supply of people who can benefit much more from your resources than you can, at least up to the point where giving more would impede your ability to give even more in the future, which is pretty, pretty far.

One approach is to beat yourself up for failing to live up to that ideal, but that doesn't really make a lot of sense. Nothing good comes from beating yourself up that way. And you say "Well, then what's the alternative? Do you just throw up your hands and say 'Forget it. I can't live up to the ideal. So, I'm just going to do nothing'?" Well, now we're back to where we started.

Charlie Bresler, who runs The Life You Can Save, the organization that Peter Singer started after his book, has a nice phrase, which is "personal best". The right way to think about it is not "You have to do everything" or "It's fine to do nothing" but "Do what you feel comfortable with to start." And then once that feels comfortable, and maybe your income has grown or you've just rethought your needs and you don't need as much money as you thought, then you say "Okay. Well, I'll see if I can do a little bit more. And see if I can do a little bit more." And that approach just makes so much more sense: you don't have the despair of "I'm failing to live up to this ideal" or the despair of "It's impossible. I can't do this." Instead, you just do what you can and try to do better.

I think that that's a much healthier and in the long run more sustainable and spreadable approach. If you say "Look, I live a happy fulfilling life. I don't feel like I'm deprived but I use a substantial amount of my income to help people who are much less fortunate than me and who can benefit so much more than I can from these resources," that not only, I think, is admirable, but it's emulatable. That's something that you can pass on to your children and your friends. The best sort of altruistic culture is the most sustainable one.

Luke Freeman: I've heard many times from people in the Giving What We Can community that picking a "Schelling point" to aim at has been a nice way of resolving moral demandingness for them, a way to say "Here's a reasonable amount that's significant but not unsustainable for many people".

22:27 – How "trolley problems" accidentally took off

Luke Freeman: I would be interested in how you feel about the pop-culture representations of pseudo-utilitarian characters like Spock, or, in many cases, villains. And do you see that changing?

Joshua Greene: I think there was some Batman movie or something like that where the villain is basically creating a trolley problem: "Do you save this person or save these people? Ha, ha, ha, ha, ha."

The hero is never going to say "Okay, save more people" or "Let them die and save the person that I care about." Instead, they're going to find a way to argue their way out of the dilemma or to fight their way out of the dilemma.

And what I find when I pose these dilemmas to people is that they'll work as hard as they can to dismantle the dilemma so they don't have to face it, so that there isn't a tension between the harm and the greater good. But one thing that I regret about my own career as a scientist is that I got interested in trolley dilemmas, you know, "kill one person to save five people," because I was interested in the psychology of objections to utilitarianism.

So, I went out of my way to look at utilitarianism in its least appealing moments to try to understand the psychology against it, and to see whether the feeling that it's wrong to kill one person to save five reflects a really deep kind of deontological truth or is an over-generalization of an otherwise good principle like "Don't kill people".

And so, that's what I was up to. I was focusing on utilitarianism at its least appealing for honest philosophical reasons. When I started this, I wasn't thinking this was going to really take off and become something that like tons of people in psychology and maybe outside of psychology learn about. And for a lot of people, their introduction to the idea of utilitarianism was pushing people in front of trolleys and I was like "Oh no, what have I done?" and I was like "That was never my intention."

It was kind of, on the one hand, this attempt to be a good honest philosopher and face the argument against your view head on and also an interesting kind of psychological puzzle where you look at these weird cases in order to tease apart the competing cognitive mechanisms. So, they're kind of like visual illusions where it's not a typical thing that you'd look at but it's constructed in such a way that it tells you something about your visual system.

And so, I had sort of weird psychological reasons and weird philosophical reasons for focusing on these "sacrificial dilemmas," as people now call them, but it's generated a lot of unnecessary bad press for utilitarianism that there's been so much focus on this. Whereas in real life, being a utilitarian is much more about being generous in a thoughtful, evidence-based way and thinking carefully about what you eat and what kind of a career you have. It's not about sacrificing people in transportation dilemmas.

I'm not sure what I would do differently, but that really is a regret that I have, that my research has put so much of a focus on killing people for the greater good.

Luke Freeman: I really appreciated the things you were able to tease out in that, things like instrumental harm as a means to an end, and also being the person directly involved in the action, and just the dramatic difference those make.

25:47 – Different versions of trolley dilemmas

Joshua Greene: So, some of the early experiments we did, and this was not high-tech brain-imaging stuff, this was just giving people lots of slightly different versions of trolley dilemmas and seeing what our intuitions are sensitive to, and it seems like there are two main factors that interact.

One is what we call personal force. So, if you ask people "Is it okay to push somebody with your hands in front of the trolley to save five people?", people say no. "Is it okay to push somebody with a pole?", still no. "Is it okay to hit a switch that opens a trap door that will drop somebody in front of the trolley and save five people?", there are a lot more people who will say yes. And then they'll also say yes if you're hitting a switch and you're farther away but the spatial distance doesn't make a big difference.

So, it's something like pushing versus hitting a switch, but what we also found is that if the pushing is not instrumental, then it doesn't have the same effect. One of the cases we gave people was: let's say you can save five people by hitting a switch, but you have to run across this narrow bridge to get to the switch, and if you do that, you're going to knock this person off the bridge. There, people will mostly say it's okay to run to the switch even if it means you're going to knock somebody off, as long as you save five people. You're pushing them with your muscles, your body, it's very close and personal, but their death is not a means to an end. It's a foreseen side-effect. It's collateral damage, as people say.

And then you can have other cases where it is a means but there's no personal force, and it doesn't have much of an effect. So, one of the early, sort of fun, scientific mysteries that we worked out was isolating these factors and showing how they interact with each other.

My colleague Fiery Cushman has really done a brilliant job of understanding the learning mechanisms and emotional training mechanisms that give rise to these feelings.

If you ask people to think about the consequences of performing some harmful action, how bad they feel about the consequences doesn't predict what they'd say about these moral dilemmas. But if you ask them "Imagine being an actor and you have to act out a scene in which you, you know, take a knife and stab somebody like this. How bad do you feel just imagining that action?", that does predict what people say about the moral dilemmas.

So, it's how we attach emotions to actions in particular contexts. Imagine people doing a kind of decision-making maze, where there are different options you pick, and they lead to different rooms, which can then lead to other rooms with rewards or punishments, so you can learn to associate certain actions with certain outcomes over time. But then you can be in a situation where you learn that, actually, if you do the thing that normally leads to something bad, it'll lead to something good, or if you do the thing that normally leads to something good, it'll lead to something bad, but you're told there's this other pathway that will get you to the good thing.

This is not a moral context at all; it's essentially navigating a maze and getting little bits of cheese or electric shocks. But the people who rely more on their feelings about what has worked well in the past, the people who take a more model-free approach, that is, who aren't mapping out the whole situation but just asking "How do I feel about performing this action in this context, based on my past experience?", that predicts people's moral judgments in trolley sorts of cases.

So, it's really these very general principles of learning and feeling that apply outside the moral domain as well as in it that are sort of guiding our judgements in these cases.
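The model-free versus model-based distinction Greene describes here comes from reinforcement learning. As a rough sketch, not something from the interview itself and with purely hypothetical names and payoffs: a model-free learner caches a value for each action based on how choosing it has felt in the past, while a model-based learner plans over a map of the task, so only the model-based learner adapts immediately when told the payoffs have swapped.

```python
# Minimal illustrative sketch (hypothetical names and payoffs):
# a toy one-step "maze" with two doors, one hiding cheese (+1), one a shock (-1).
rewards = {"left": 1.0, "right": -1.0}

# --- Model-free agent: caches a value per action from past experience ---
q_values = {"left": 0.0, "right": 0.0}
alpha = 0.5  # learning rate

def model_free_choice():
    # Pick whichever action has felt best in the past (cached value).
    return max(q_values, key=q_values.get)

def model_free_update(action, reward):
    # Nudge the cached value toward the reward just received.
    q_values[action] += alpha * (reward - q_values[action])

# --- Model-based agent: plans over a map of the task it is told about ---
def model_based_choice(known_rewards):
    # Look ahead using the current model of the world, not past habit.
    return max(known_rewards, key=known_rewards.get)

# Train the model-free agent on the original payoffs.
for _ in range(20):
    action = model_free_choice()
    model_free_update(action, rewards[action])

# Now the experimenter announces that the payoffs have swapped.
rewards = {"left": -1.0, "right": 1.0}

print("model-free picks: ", model_free_choice())          # still "left" (habit)
print("model-based picks:", model_based_choice(rewards))  # "right" (re-plans)
```

On this picture, the cached, habit-like values play the role of the gut feelings Greene describes, while the explicit planning corresponds to reasoning through the consequences.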

Luke Freeman: I think it also reaffirms that there's this big context shift going on. A lot of the time we're operating in contexts we're quite familiar with, and a lot of the intuitions we have around what's virtuous or what's a good rule, that sort of thing, work really well at that scale. But there are these rare instances where we happen to find ourselves in a world with massive global inequality, or where we're ruining the environment of our planet, all these things which are just so distant that you need to be very methodical when you're thinking about them.

Joshua Greene: Yeah, that's right. The character traits that make you nice or a good person overlap with, but I think are different from, the traits that would actually give you the most positive impact on the world, because of these weird modern circumstances. Just because you live in an affluent country and make what's locally considered a decent living, devoting your resources to strangers, if you do it in the right way, does so much more good than doing what is considered the typical good-person thing to do.

31:08 – Closing Thoughts

Luke Freeman: Well, this has been really wonderful. I really appreciate your time. Any closing thoughts you'd like to add?

Joshua Greene: It's been a wonderful opportunity to be here. It's been great talking to you and great to have the chance to share these thoughts with the Giving What We Can community. There's no group of people I admire more than those who've made this commitment to making the world as much better as they possibly can. I hope that Giving Multiplier can be a useful tool for bringing more people into this circle of effective giving and doing as much good as possible. I'm looking forward to seeing what people do with it.

So, thanks so much for giving me the chance to share these ideas.