I love the work of developmental psychologist Paul Bloom. He and I have similar interests, similar ways of thinking, and even similar birthdays. When it appears that we disagree, as in the debate that first brought us into contact (over my 2001 “Emotional Dog” article and his response), it usually turns out that we agree on the basic story and differ only on the lessons we draw from that story. So I was quite surprised to read Paul’s prediction, in Nature, that my Social Intuitionist Model (on which I thought we largely agreed) will be proven wrong. Here’s the central passage:
All this leaves little room for rational deliberation in shaping our moral outlook. Indeed, many psychologists think that the reasoned arguments we make about why we have certain beliefs are mostly post-hoc justifications for gut reactions. As the social psychologist Jonathan Haidt puts it, although we like to think of ourselves as judges, reasoning through cases according to deeply held principles, in reality we are more like lawyers, making arguments for positions that have already been established. This implies we have little conscious control over our sense of right and wrong. I predict that this theory of morality will be proved wrong in its wholesale rejection of reason. Emotional responses alone cannot explain one of the most interesting aspects of human nature: that morals evolve.
I have underlined the two main mistakes Bloom makes in setting up his difference with me. I’d like to correct them here.
1) The “wholesale rejection of reason”
I have never engaged in a “wholesale rejection of reason.” Four of the six links in the Social Intuitionist model are kinds of reasoning. I merely rejected the worship of reasoning common in the Kohlbergian tradition. I took seriously the research on motivated reasoning and the confirmation bias, which show that, just as David Hume said, reasoning is extremely effective as a servant, but rather ineffective as a tool for discovering the truth, at least when carried out by individuals. (A forthcoming review article on the “argumentative function of reasoning” will make this case far better than I did.) But I have always made it clear that reasoning is carried out for a purpose, and that purpose is to influence other people, particularly by triggering moral intuitions in them. For example:
The discussion thus far may have given the impression that the model dismisses reasoning as post-hoc rationalization (link 2). However it must be stressed that four of the six links in the model are reasoning links, and three of these links (3, 5, and 6) are hypothesized to have real causal effects on moral judgment. Link 3, the reasoned persuasion link, says that people’s (ex post facto) moral reasoning can have a causal effect – on other people’s intuitions. In the social intuitionist view moral judgment is not just a single act that occurs in a single person’s mind. It is an ongoing process, often spread out over time and over multiple people. Reasons and arguments can circulate and affect people, even if individuals rarely engage in private moral reasoning for themselves. (Haidt, 2001, p. 828)
2) “Emotional responses alone cannot explain…”
I fully agree with Bloom that emotions cannot be the whole story of morality. That’s why I shifted from talking about emotions in the 1990s to talking about intuitions, which are clearly a form of cognition, in the 2000s. It’s true that I used the phrase “the emotional dog,” which I thought sounded better than “the intuitive dog.” But I stated clearly in that paper, and many others, that the key terms in the debate are “intuition” and “reasoning,” which are both forms of cognition:
It must be stressed that the contrast of intuition and reasoning is not the contrast of emotion and cognition. Intuition, reasoning, and the appraisals contained in emotions (Frijda, 1986; Lazarus, 1991) are all forms of cognition. Rather, the words “intuition” and “reasoning” are intended to capture the contrast made by dozens of philosophers and psychologists between two kinds of cognition. The most important distinctions (see Table 1) are that intuition occurs quickly, effortlessly, and automatically, such that the outcome but not the process is accessible to consciousness, while reasoning occurs more slowly, requires some effort, and involves at least some steps that are accessible to consciousness. (Haidt, 2001, p. 815)
Of course, even intuitions, which are much more common and flexible than emotions, are not the whole story in the SIM. They’re just most of the story. They’re where the action is. So if you want to produce social change, you’ve got to change intuitions.
Oddly, Bloom uses narrative and story-telling as examples of the sorts of processes that are powerful in producing moral change. But these are exactly the sorts of processes that the SIM emphasizes, because stories and narrative are so effective at triggering intuitions. In order to contradict the SIM he’d need to show that logical reasoning (which does not appeal to emotions and intuitions) is often the cause of social change. Instead he states that “Humans are natural story tellers, and use narrative to influence others, particularly their own children.” But narrative is a crucial part of SIM, which Craig Joseph and I have linked to virtue ethics:
Moral education, on our account, is a matter of linking up the innate intuitions and virtues already learned with a skill that one wants to encourage. Parents and educators should therefore recognize the limits of the ‘direct route’ to moral education. It is helpful to espouse rules and principles, but only as an adjunct to more indirect approaches, which include immersing children in environments that are rich in stories and examples that adults interpret with emotion. Those stories and examples should trigger the innate moral modules, if possible, and link them to broader virtues and principles. (Haidt & Joseph, 2004, p. 65)
It seems, therefore, that once again, Paul Bloom and I basically agree on the outlines of moral psychology. We also agree that moral change is possible, particularly when gifted orators and story tellers — such as Martin Luther King Jr. — can create a new narrative and trigger supporting intuitions. In fact, I concluded my 2007 Science article with this paragraph, which would have been right at home in Bloom’s Nature piece:
Yet even though morality is partly a game of self-promotion, people do sincerely want peace, decency, and cooperation to prevail within their groups. And because morality may be as much a product of cultural evolution as genetic evolution, it can change substantially in a generation or two. For example, as technological advances make us more aware of the fate of people in faraway lands, our concerns expand and we increasingly want peace, decency, and cooperation to prevail in other groups, and in the human group as well. (Haidt, 2007, p. 1001)
So please, go out and buy Bloom’s fascinating new book, How Pleasure Works. Read his wonderful work on how the harm foundation is already operational in 6-month-old babies. But don’t make the mistake that he made, and that so many of my critics make, in assuming that I’m an emotivist who denies either the existence of reasoning or its importance for social change. We reason often, in order to persuade people. We just don’t reason often, or well, as part of our own private search for moral truth.
[To read Paul Bloom's response, click here]
P.S. Here are three additional quotes that I hope potential critics will read before accusing me of siding with “emotion” versus “cognition,” or claiming that social change is impossible because we are “prisoners of our emotions.”
Link 5, the reasoned judgment link, recognizes that a person could, in principle, simply reason her way to a judgment that contradicts her initial intuition. The literature on everyday reasoning (Kuhn, 1991) suggests that such an ability may be common only among philosophers, who have been extensively trained and socialized to follow reasoning even to very disturbing conclusions (as in the case of Socrates, or the more recent works of Derek Parfit and Peter Singer). Yet the fact that there are at least a few people among us who can reach such conclusions on their own, and then argue for them eloquently (link 3), means that pure moral reasoning can play a causal role in the moral life of a society. (Haidt, 2001, p. 829)
The social intuitionist model also offers more general advice for improving moral judgment. If the principal difficulty in objective moral reasoning is the biased search for evidence (Kunda, 1990; Perkins, Farady, & Bushey, 1991), then people should take advantage of the social persuasion link (link 4) and get other people to help them improve their reasoning. By seeking out discourse partners who are respected for their wisdom and openmindedness, and by talking about the evidence, justifications, and mitigating factors involved in a potential moral violation, people can help trigger a variety of conflicting intuitions in each other. If more conflicting intuitions are triggered, the final judgment is likely to be more nuanced and ultimately more reasonable. The social intuitionist model, therefore, is not an anti-rationalist model. It is a model about the complex and dynamic ways that intuition, reasoning, and social influences interact to produce moral judgment. (Haidt, 2001, p. 829)
Affective reactions push, but they do not absolutely force. We can all think of times when we deliberated about a decision and went against our first (often selfish) impulse, or when we changed our minds about a person…. There are at least three ways we can override our immediate intuitive responses. We can use conscious verbal reasoning, such as considering the costs and benefits of each course of action. We can reframe a situation and see a new angle or consequence, thereby triggering a second flash of intuition that may compete with the first. And we can talk with people who raise new arguments, which then trigger in us new flashes of intuition followed by various kinds of reasoning. The social intuitionist model includes separate paths for each of these three ways of changing one’s mind, but it says that the first two paths are rarely used, and that most moral change happens as a result of social interaction. Other people often influence us, in part by presenting the counterevidence we rarely seek out ourselves. (Haidt, 2007 p. 999)