I’m reading the work of Jonathan Haidt, the University of Virginia psychologist, who argues that our political resentments, disgusts, and outrage are rarely supported by fully developed arguments and deliberation. His research has implications for philosophy, anthropology, psychology, and even the culture wars in America; not surprisingly, it provokes controversy and lively debate.
—– —– —–
This article from the San Francisco Believer further clarifies his views. I’ll post it in installments, because I don’t think it will be read otherwise.
—– —– —–
[SOCIAL AND MORAL PSYCHOLOGIST]
- Aversion to Suffering
- Reciprocity, Fairness, and Equality
- Hierarchy, Respect, and Duty
- Purity and Pollution
These are indignant times. Reading newspapers, talking to friends or coworkers, we seem often to live in a state of perpetual moral outrage. The targets of our indignation depend on the particular group, religion, and political party we are associated with. If the Terri Schiavo case does not convince you of this, take the issue of same-sex marriage. Conservatives are furious over the prospect of gays and lesbians marrying, and liberals are furious that conservatives are furious. But has anyone on either side subjected their views to serious scrutiny? What’s the response, for example, when conservatives are asked exactly why gays and lesbians shouldn’t be allowed to marry? “It threatens the institution of marriage.” OK. How? “Marriage is between a man and a woman.” (Democrats give this answer as well.) Right, but why? “It’s unnatural.” Isn’t that true of marriage in general? “Well… look… I mean… it’s just wrong!”
If you are familiar with the work of Jonathan Haidt, it will come as no surprise that our resentment, disgust, and outrage are rarely supported by fully developed arguments and deliberation. A psychologist at the University of Virginia, Haidt has devoted his career to the study of moral judgment and decision-making; his results are revealing and perhaps a bit unflattering. We tend to think of ourselves as arriving at our moral judgments after painstaking rational deliberation, or at least some kind of deliberation anyhow. According to Haidt’s model—which he calls “the social intuitionist model”—the process is just the reverse. We judge and then we reason. What, then, is the point of reasoning if the judgment has already been made? To convince other people (and also ourselves) that we’re right.
To support his model, Haidt has devised a number of ingenious experiments. He presents scenarios designed to evoke strong moral responses (“it’s wrong!”) but ones that are hard to justify rationally. (Examples include: having sex with a chicken carcass you’re about to eat, wiping your toilet with a national flag, and, as we’ll see, brother/sister incest.) Although the goals of these experiments vary, the results all point to the causal importance of emotions and intuitions in our moral life, and to different roles for reason from the ones we might expect or hope for. Haidt’s model has gone against some dominant trends in moral and social psychology, in particular the theories of well-known psychologists Piaget and Kohlberg, whose work appeared to support rationalist models of moral judgment (where reason plays the primary causal role in moral decision-making). But as Haidt himself notes, his own work can be placed within a grand tradition of psychology and philosophy—a return to an emphasis on the emotions which began in full force with the theories of the Scottish philosopher David Hume.
One last thing to say about Jon Haidt: he gives the best conference talk in the business. There are slides, great visuals, videos of fraternity guys trying to explain why sleeping with your sister is wrong, images of a toddler perturbed about not getting the same number of stickers as the child beside her (or, in one hilarious case, a three-year-old who is not perturbed at all), and plenty of sharp insights and jokes. The research he presents has implications for philosophy, anthropology, psychology, and even the culture wars in America; not surprisingly, it provokes controversy and lively debate. I interviewed Haidt after a conference at Dartmouth College.
I. REASON IS THE PRESS SECRETARY OF THE EMOTIONS
THE BELIEVER: I want to start out talking about the phenomenon you call “moral dumbfounding.” You do an experiment where you present five scenarios to a subject and get their reaction. One of these scenarios describes a brother and sister, Julie and Mark, vacationing in the south of France. They have some wine, one thing leads to another, and they decide they want to have sex. They use two different kinds of contraception and enjoy it, but they decide not to do it again. How do people react to this, and what conclusions do you draw from their reaction?
JONATHAN HAIDT: People almost always start out by saying it’s wrong. Then they start to give reasons. The most common reasons involve genetic abnormalities or that it will somehow damage their relationship. But we say in the story that they use two forms of birth control, and we say in the story that they keep that night as a special secret and that it makes them even closer. So people seem to want to disregard certain facts about the story. The experimenter points out these facts and says, “Oh, well, sure, if they were going to have kids, that would cause problems, but they are using birth control, so would you say that it’s OK?” And people never say, “Ooooh, right, I forgot about the birth control. So then it is OK.” Instead, they say, “Oh, yeah. Huh. Well, OK, let me think.”
So what’s really clear, you can see it in the videotapes of the experiment, is: people give a reason. When that reason is stripped from them, they give another reason. When the new reason is stripped from them, they reach for another reason. And it’s only when they reach deep into their pocket for another reason, and come up empty-handed, that they enter the state we call “moral dumbfounding.” Because they fully expect to find reasons. They’re surprised when they don’t find reasons. And so in some of the videotapes you can see, they start laughing. But it’s not an “it’s so funny” laugh. It’s more of a nervous-embarrassment puzzled laugh. So it’s a cognitive state where you “know” that something is morally wrong, but you can’t find reasons to justify your belief. Instead of changing your mind about what’s wrong, you just say: “I don’t know, I can’t explain it. I just know it’s wrong.” So the fact that this state exists indicates that people hold beliefs separate from, or with no need of support from, the justifications that they give. Or another way of saying it is that the knowing that something is wrong and the explaining why are completely separate processes.
BLVR: Are the subjects satisfied when they reach this state of moral dumbfounding? Or do they find something deeply problematic about it?
JH: For some people it’s problematic. They’re clearly puzzled, they’re clearly reaching, and they seem a little bit flustered. But other people are in a state that Scott Murphy, the honors student who conducted the experiment, calls “comfortably dumbfounded.” They say with full poise: “I don’t know; I can’t explain it; it’s just wrong.” Period. So we do know that there are big differences in people on a variable called “need for cognition.” Some people need to think about things, need to understand things, need to reason about things. Many of these people go to graduate school in philosophy. But most people, if they don’t have a reason for their moral judgments, they’re not particularly bothered.
BLVR: So your conclusion is that while we might think that Reason or reasons are playing a big causal role in how we arrive at moral judgments, it’s actually our intuitions—fueled by our emotions—that are doing most of the work. You say in your paper that reason is the press secretary of the emotions, the ex post facto spin doctor.
JH: Yes, that’s right.
BLVR: What do you mean by that, exactly?
JH: Reason is still a part of the process. It just doesn’t play the role that we think it does. We use reason, for example, to persuade someone to share our beliefs. There are different questions: there’s the psychological question of how you came by your beliefs. And then there’s the practical question of how you’re going to convince others to agree with you. Functionally, these two may have nothing to do with one another. If I believe that abortion is wrong, and I want to convince you that it’s wrong, there’s no reason I should recount to you my personal narrative of how I came to believe this. Rather, I should think up the best arguments I can come up with and give them to you. So I think the process is very much the same as what a press secretary does at a press conference. The press secretary might say that we need tax cuts because of the recession. Then, if a reporter points out to him that six months ago he said we needed tax cuts because of the surplus, can you imagine the press secretary saying: “Ohhhh, yeah, you’re right. Gosh, I guess that is contradictory.” And then can you imagine that contradiction changing the policy?
BLVR: I’m having a hard time doing that.
JH: Right. The president dispatches the press secretary, and the secretary’s job is basically to lie… to just make up a story. Should I take that back? No, I won’t take that back. The press secretary’s job is to be a lawyer. To argue for a position. And he doesn’t need to consult with the president about what the real reasons were for instituting the policy. Those are irrelevant. He just needs to build the best case he can.
BLVR: You brought this up in your talk at Dartmouth, and I like the analogy. You said that when it comes to moral judgments, we think we’re scientists discovering the truth. But actually we’re lawyers arguing for positions we arrived at by other means. So, setting aside a few philosophy graduate students, do you think this is how our moral life works?
JH: For most people, most of the time, yes. There’s a question of what you could call the ecological distribution of moral judgments. Now, by moral judgment I mean any time you have a sense that someone has done something good or bad. Think of how often you have that sense. If you live in a city and you drive, you probably have that sense many times a day. When I read the newspaper, I think unprintable thoughts, thoughts of anger. So I think moral judgment is ubiquitous. Not as ubiquitous as aesthetic judgments. As we walk around the world we see many beautiful and ugly things. But we don’t deliberate about them. We just see things as beautiful or ugly. My claim is that moral judgment is very much like aesthetic judgment. In fact, whenever I’m talking with philosophers who are trying to get me to clarify what I’m saying, if I ever feel confused, I just return to aesthetic judgment, and that saves me. I think whatever is true of aesthetic judgment is true of moral judgment, except that in our moral lives we do need to justify, whereas we don’t generally ask others for justifications of aesthetic judgments.
BLVR: So now where do these moral intuitions come from? I guess I’m looking to see if you think they’re a product of evolution.
JH: Yes, I do. We’re born into this world with a lot of guidance as to how to make our way. Our tongues come with various receptors that make us respond well to fruit and meat. Our bodies are designed to give us pleasure when we encounter fruit and meat. And to get displeasure from bitter sensations. So our bodies are designed to mesh with properties of the real world, the real physical world—to track nutrients and poisons.
Similarly, our minds come equipped to feel pleasure and displeasure at patterns in the social world. When we see someone cheat someone else, we feel displeasure, dislike. And this dislike is a signal to us to avoid that person, to avoid trusting that person, cooperating with him. When we see a heroic act, or an act of self-sacrifice, or charity, we feel an emotion that I call moral elevation. We feel a warm, very pleasurable feeling that includes elements of love. We’re much more likely to help such people, to trust them, and to want relationships with them. So just as our tongues guide us to good foods and away from bad foods, our minds guide us to good people, away from bad people.
BLVR: And to have these feelings was adaptive—they contributed to greater individual fitness—in the time we did most of our evolving?
JH: Yes. There are a couple of watersheds in human evolution. Most people are comfortable thinking about tool use and language use as watersheds. But the ability to play non-zero-sum games was another watershed. What set us apart from most or all of the other hominid species was our ultrasociality, our ability to be highly cooperative, even with strangers, people who are not at all related to us. Something about our minds enabled us to play this game. Individuals who could play it well succeeded and left more offspring. Individuals who could not form cooperative alliances, on average, died sooner and left fewer children. And so we are the descendants of the successful cooperators.
—The interview will be continued tomorrow—
- The Evolution of Morality – Jonathan Haidt Describes the Science Behind Modern Politics (TrendHunter.com) (trendhunter.com)
- The Implications, Dangers, and Realities of Haidt’s theory of Social Intuitionism (a theory of why we believe what we believe) (fensel.net)
- On being wrong, cont’d: Liberals, conservatives, and moral imagination (bluejaysway.wordpress.com)
- Jonathan Haidt’s Moral-Political Psychology (psychologytoday.com)
- Stenosophic Liberalism (socialpathology.blogspot.com)
- Review of The Emotional Dog and its Rational Tail (ethicalrealism.wordpress.com)
- Jonathan Haidt on the Moral Foundations of Occupy Wall Street (reason.com)
- What’s wrong with inequality? (clubtroppo.com.au)