In chapters 3 & 4, Haidt elaborates his basic dual process model of the mind, which he represents metaphorically as a (rational, conscious, deliberative) rider on an (intuitive, unconscious, automatized) elephant. This sort of dual process theory is well on its way to becoming orthodoxy in contemporary psychology. (Though it’s not there yet. See this symposium in Perspectives on Psychological Science, kicked off by this target article by Jonathan St. B. T. Evans and Keith Stanovich. The best single account of the dual process theory that I know of is Daniel Kahneman’s Thinking, Fast and Slow.) In Haidt’s version, the elephant is the seat of emotion, and the rider is treated as subordinate and even subservient to the elephant. Thus, his view has more than a whiff of Platonic dualism about it, with the twist that the Platonic charioteer can’t control his team of horses. At best, the charioteer urges and remonstrates with the team. For the most part, the charioteer’s role is to persuade others that the team is going the right way, whatever the appearances may be.
This adversarial view of the relationship between elephant and rider doesn’t sit particularly well with me, much less the treatment of reason as mere post hoc rationalization. The latter, unfortunately, is in certain ways an up-and-coming view. For example, here is a paper by Masicampo and Baumeister that argues that conscious thought does not directly control behavior at all! Rather, conscious thought is for communication, which is for social and cultural life. To be fair, neither Haidt nor Masicampo and Baumeister is saying that reason never serves any functions but social or rationalizing ones. They acknowledge the role of reason in achieving an understanding of the physical world, for example, and in planning action. And in fact, this role is Masicampo and Baumeister’s emphasis. But Haidt’s focus is otherwise. He is interested in the role of reason in morally relevant judgment and behavior, and in this realm he is not a cognitivist. At least, not as a practical matter. Whatever the ultimate status of morality might turn out to be with respect to truth, as a matter of daily life morality for Haidt is a sociocultural phenomenon that exists because it performs certain sociocultural functions. It is no more true or false than marriage is true or false. Of course, we treat moral judgments as though they could be true or false. Probably we have to do so, or they couldn’t perform their function. But this is an illusion. Thus, moral reasoning is rationalization (in daily practice anyway), because there is nothing else for it to be. Typically, what passes for moral reasoning in everyday life is a tissue of fallacies, but even when it appeals appropriately to legitimate moral principles, such reasoning does not appeal to moral truth (unless by happenstance), because morality did not socially evolve by discovering truth. (Rather, it evolved essentially by facilitating social cooperation.)
The propositions that pass for legitimate moral principles are simply rationalizations that have been successful enough in the past to have achieved widespread, habitual acceptability.
As I said in an earlier comment, I don’t take any of this to commit Haidt to noncognitivism in an ultimate sense. Just because moral judgment, norms, and behavior arose through processes of genetic and cultural evolution doesn’t mean there’s no such thing as objectively better or worse ways of living or that there is no truth about right and wrong that we can discover through some combination of evidence and rational argument. To suppose otherwise would be like thinking that, because we have been evolutionarily programmed to prefer certain foods to others, to drink when we’re thirsty, to find certain substances disgusting, to find certain stimuli pleasurable and others painful, and so forth, there is no truth about health. Of course, for all I know, Haidt might be a thoroughgoing noncognitivist. It’s just that, from what I’ve read so far, I see no reason for him to be committed to it (and I’ve just given what seems to me a pretty good reason to avoid such a commitment). I suspect that, being a psychologist and not a philosopher, he may not be much interested in the question either way.
In the comment I just mentioned, I emphasized the cultural evolution of moral judgment and behavior, rather than genetic evolution, because I have come to think it is the more important process. Haidt himself has so far emphasized genetic evolution much more than cultural. Again, I don’t see a conflict here, and I suspect that Haidt will start talking about cultural evolution more as the book progresses.
Finally, on Haidt’s tendency to portray the relationship between elephant and rider as adversarial, I think he reasons that the elephant is the product of many hundreds of millions of years of evolution, whereas the rider can be no more than 5 million years old and is probably much younger than that (see TRM, 53–54). No other animal has a rider with anything like the cognitive power of ours, much less a linguistically endowed rider, and yet other animals function perfectly well. If so, then the elephant must be capable of functioning perfectly well without (much of) a rider. Thus, the elephant is the true agent, the rider its servant, and if the rider should think to object to something the elephant does, tough.
I think this characterization is too extreme, in two ways. First, as I indicated, it’s not that other animals have no rider at all. If that were true, they would be unconscious, like one of David Chalmers’s zombies. But I take it that that is absurd. Equally absurd would be the allegation that the whole difference between controlled and automatic processing popped into existence for the first time with the genus Homo some 2.5 million years ago. No, riders have been around in partnership with their elephants for a long time, thinking, recalling, problem solving, and controlling behavior. Language is new, yes, but that isn’t all there is to the rider. Second, the elephant is in certain ways programmable by the rider. This is the whole point of cognitive–behavioral therapy. It is true, of course, that the elephant has been programmed by millions of years of biological evolution to have certain innate reactions to certain stimuli—disgust at some things, lust for others, and so forth. It is also true, and at least as important, that the associative learning mechanism is always running at every waking moment, forging connections and prompting thoughts we can’t help having. Nevertheless, many of the ideas laid down in long-term memory are supplied by the creative, reasoning, imaginative rider. (This is a central point of Masicampo and Baumeister’s article.) Moreover, the contents of long-term memory can be changed as the result of the rider’s conclusions, and these changes can result in new intuitions in the elephant. Haidt acknowledges this, of course, but he doesn’t emphasize it as much as he should, in my own opinion. Haidt’s cure for bad elephant behavior is not to retrain the elephant (through cognitive–behavioral therapy or otherwise) but to change the external, institutional environment (TRM, 106). For example, ask people to sign their expense reports at the beginning, promising to be honest, rather than at the end, claiming to have been honest.
In this respect, Haidt’s view is akin to the “libertarian paternalism” of Cass Sunstein and Richard Thaler.
So much for general commentary. I have permitted myself to do this at the beginning of this discussion, rather than waiting until the end, because chapters 3 and 4 add nothing fundamental to the framework presented in chapters 1 and 2, which we have already discussed and understand pretty well, I think. The task of chapters 3 and 4 is to provide evidence from experimental psychology in support of the framework. In what remains of these comments, I shall describe and comment on some of this work, choosing the items that seem most interesting.
Chapter 3 concentrates on the primacy of the elephant; i.e., on the principle that intuitions come first, so that most of our vaunted moral reasoning is so much post hoc rationalization coming along behind, like the sweeper guy at the end of the parade in the credits of “Fractured Fairy Tales” cartoons. I will comment on four of the results Haidt cites as supporting his claims about the elephant. As we will see, I don’t find all of them equally persuasive.
First, he describes the claim of Wilhelm Wundt (the semi-legendary German founder of experimental psychology) that all sense-perception includes an affective element with a positive or negative valence, so that all stimuli all the time are actively evaluated with respect to the basic biological question: approve/disapprove, like/dislike, approach/avoid. Haidt doesn’t really describe any actual research, but he claims that Wundt’s thesis was revived and somehow validated in the 1980s by Robert Zajonc (a big-shot psychologist). Despite my tone, I don’t really wish to question this, as it makes eminent sense to me. This is just what the sort of organisms that survive to pass their genes on to the next generation should be expected to do. What it means, and what makes it interesting, is that the elephant is constantly evaluating. Affective responses aren’t limited to obviously emotional situations, like being confronted by a robber or attending a funeral or your child’s wedding. We affectively approve/disapprove of all stimuli.
Second, Haidt cites psychopathy as evidence that emotions are necessary for morality. Psychopaths lack moral emotions—they lack the capacity to empathize with or feel sorry for other people or to feel guilty or embarrassed about their misdeeds—and they also lack morals. Unfortunately, he doesn’t give the kind of quantitative evidence that would be needed to show a strong correlation. What about the psychopaths who do not go on crime sprees or commit other offences? Or does this not happen? If it doesn’t, how would we know? I suspect it’s in the nature of the case that the evidence here is vague and imprecise, resting on clinical impressions and trading on people’s horror of the phenomenon of psychopathy. Since the possibility of moral psychopaths would cut strongly against Haidt’s thesis and cannot be ruled out a priori without begging the question, the evidence from psychopaths for his thesis is weak.
Third, Haidt cites research by Kiley Hamlin and colleagues demonstrating that infants as young as 4.5 months recognize helpful and harmful behavior in others toward third parties and prefer helpful agents to harmful ones. She published a review of her work recently here. Her basic paradigm is to show infants morality plays by means of puppet shows. For instance, one puppet tries repeatedly but unsuccessfully to climb a hill. There is a second, helpful puppet that pushes the first to the top. There is also a third, hindering puppet that knocks the first to the bottom. Infants, by reaching and other signs, show a remarkably strong preference for the helper puppet and dislike of the hinderer. An important aspect of the infants’ performance in these experiments is that it depends on recognizing intentionality in the puppets’ actions. Through various manipulations, Hamlin shows that merely helping the first puppet to succeed is not sufficient to produce the effect. If the puppets don’t appear to understand what they are doing, the effect disappears, even if the first puppet is in fact helped by the second and hindered by the third. This sort of evidence implies that the perception of intentionality in others is probably innate in humans, as is a preference for third-party helping and dislike of third-party hindering, even in observers who have no selfish interest in the outcome.
Fourth, Haidt mentions a “now famous study” published in Science by Josh Greene and some colleagues while Greene was still only a grad student (in philosophy) at Princeton. I think it’s right to say that the paper is famous—I’ve read of it elsewhere, and it has nearly 3000 citations—but it’s hard to see why. Greene’s procedure was to present trolley-problem-type scenarios to participants while scanning their brain activity in an fMRI machine. There were three types: (a) scenarios like the trolley problem, where the suggested action is relatively impersonal (pull a switch); (b) scenarios like the footbridge variation on the trolley problem, where the suggested action is relatively personal and emotional (push a fat stranger in front of the oncoming trolley); (c) nonmoral control scenarios (decide which of two coupons to use at a store). They found that brain areas associated with emotional processing were significantly more active in the second condition (moral-personal) than in the other two (moral-impersonal and nonmoral). Also, areas associated with working memory, which have been shown to be less active during emotional processing, were indeed less active in the second condition and more active in the other two. Haidt says that strength of emotion predicted moral judgment in this study, implying that people who had stronger emotional reactions to the moral-personal scenarios were more likely to disapprove the suggested action (TRM, 77). However, that is not what the study report says. Rather, participants in general engaged in more emotional processing when considering the moral-personal scenarios, regardless of their decisions. (The other two conditions, moral-impersonal and nonmoral, showed similar patterns of brain activity to each other.) Where the decision did make a difference was in reaction time. 
Participants who approved the suggested action in the moral-personal scenarios (throw the fat guy under the trolley) took nearly two seconds longer on average to decide this than participants who disapproved. There were no statistically significant differences in reaction times for different decisions in the other two conditions. The authors interpret this to mean that participants who approved the action in the moral-personal case overcame their emotional abhorrence of the suggested action in order to answer in accordance with logical utilitarian principle, and this took extra time. (An annoying aspect of this study is the authors’ evident assumption that throwing the switch—and therefore also pushing the fat guy—is simply the right answer to the trolley problem, on the grounds that “nearly everyone manages to conclude” (p. 2106) this in the unemotional, impersonal case, where logic and common sense are apparently free to prevail.) Haidt (and Greene also) takes the results to show that philosophers who wouldn’t push the fat man in the footbridge case are answering with their emotional elephant, and all their high-minded talk of rights or other deontological principles is just so much self-delusional rationalization of their feelings. But I ask you, setting all theory aside, is there anything the least remarkable or even interesting about these results? News flash: People who are invited to commit grisly murder in a good cause feel stronger emotions than those who aren’t and take longer to decide to do it than to decide not to. Amazing as this news is, its theoretical implications for moral psychology approach zero.
Chapter 4 concentrates on the rider. Its theme is that Plato’s charioteer-should-drive-the-chariot psychology is wrong: “reason is not fit to rule; it was designed to seek justification, not truth” (TRM, 86). Again, Haidt presents a series of lines of evidence in support of his thesis, and again I will describe four that seem particularly interesting or noteworthy.
First, he describes research intended to show how sensitive we are to others’ bad opinion of us. Participants sat alone in a room describing themselves into a microphone for five minutes. On a screen in front of them, numbers would flash as they spoke. They were told that the numbers represented the current rating (ranging from a high of 7 to a low of 1) by a second participant, who was listening, of how much the second participant would like to interact with them in the next phase of the study. In reality, of course, the ratings were faked by the experimenter. Imagine that you are in this study. As you talk, the numbers are going 6…5…4…3… I think we don’t need a statistical analysis to know that this would feel terrible. And that is the point of the study. Talk of not caring what other people think of us is bluff. The truth is that we care very much whether other people like and approve of us. Social disapproval is a very powerful stick. I doubt anyone—except maybe psychopaths—is immune from this.
Second, we care what we think of us, too. Research shows that people who are left unobserved and therefore free to lie and cheat do lie and cheat, but only up to a point. In the study Haidt is referring to, participants answered fifty multiple choice questions, like “What is the world’s longest river?”, marking their answers on a test form, which they then transferred to a Scantron form and handed in to the experimenter. The experimenter scanned the form and handed each participant ten cents for every correct answer. That was the basic, control condition. There were three other, experimental conditions. In the first, the correct answer was shown in gray on the Scantron form, so the participants knew the correct answers when they transferred their own answers to the form. In the second, not only were the correct answers shown on the Scantron form, but participants were instructed to shred their test forms before handing in the Scantron form to the experimenter. In the third, the correct answers were shown, the test form was shredded, the Scantron form was shredded, and the participant himself took however much money he wanted (knowing it was supposed to be ten cents per correct answer)! Now, you might expect a little cheating to go on between the control condition and the first experimental condition, and that’s just what happened. Showing the correct answers on the Scantron form magically improved performance from an average of 32.6 correct answers out of 50 to an average of 36.2. But if you are expecting the cheating to become more egregious as the opportunity grows, you will be disappointed. The results for the two remaining conditions were 35.9 and 36.1, no different from the first experimental condition. Moreover, these averages are not the result of a few bad apples cheating outrageously. Rather, they are the result of most people cheating, but cheating only a little. The implication is that people are no more dishonest than they can justify to themselves.
To cheat, you have to be able to kid yourself that you aren’t “really” cheating. You have to be able to say something like, “Oh, I really knew that one,” when changing an incorrect answer on the Scantron form. The moral Haidt draws from this experiment is that we are very good at telling self-serving lies, and this is just what we do when we have the opportunity. The rider acts, within the limits of its ability, to give the elephant what it wants.
Third, we are not very good, in most circumstances, at investigating and forming judgments concerning matters of fact. Even in matters about which we have no vested interest or personal stake, we tend to settle quickly on a hypothesis and seek to justify it to the exclusion of alternatives. Moreover, we tend to have easily satisfied standards of “proof.” Usually a single piece of evidence will do. It is very much as if, having quickly decided that H must be true, we cast around for a reason that supports it. If we find one—and it is rare that there isn’t something to be said for a given hypothesis—we stop thinking! We have a justification and are entitled to believe. The same goes for denial. Having decided that H must be false, we look around for a reason that undercuts it—and again, it will be seldom that we are unable to find one. Having found it, we can stop thinking and generally do. The educational psychologist David Perkins calls this, amusingly, the “makes-sense epistemology.” I leave it to the reader to judge whether this description does not come uncomfortably close to his own thought processes too much of the time. In one demonstration of this basic point, Perkins asked participants to make an initial judgment concerning some fairly tame social issue, such as whether giving schools more money would improve teaching and learning. Participants were then asked to write down all the reasons they could come up with on either side of the issue. Reasons were scored as “my-side” or “other-side” depending on whether they supported or opposed a participant’s initial judgment. Participants generated far more my-side arguments than other-side. Also, importantly, although IQ was by far the best predictor of people’s ability to generate relevant arguments, it predicted only the number of my-side arguments. Smarter people are no more likely to be fair-minded or thorough investigators of a question than the less smart. 
They are more effective advocates (for their elephants), not more rational thinkers. (For much more on the lack of correlation between intelligence and rationality, see the work of Keith Stanovich, for instance this book and this book.)
Fourth, Haidt points out that many of the “flaws” and “biases” in human cognition start to make sense if human cognition is reinterpreted as an advocate for the elephant instead of as a seeker of truth. Think of confirmation bias, our capacity for producing believable lies in the service of what we want, the makes-sense epistemology—these are just how reason ought to perform if its role is to act as a lawyer for its elephant. Researchers have traditionally looked at these phenomena as failures of reason. But if the evolutionary function of reason is to be a lawyer for the elephant, not a scientist in search of truth, then these phenomena are not failures! They aren’t bugs, they’re features! In Haidt’s view, this is the truth about reason. Therefore, we cannot expect people as individuals ever to be very good reasoners. For reason to produce truth, it needs the discipline of civil, collegial opposition from other reasoners. Successful reasoning is largely a social phenomenon. This is also why ideological diversity in academia is so important, and why the overwhelmingly left-wing composition of social science and humanities departments is such a bad thing.
Finally, one last point. (I mean, I’m way over the word limit anyway, so what the hell.) Haidt mentions philosopher Eric Schwitzgebel’s amusing research program in which he empirically investigates the moral behavior of moral philosophers (often with collaborator Joshua Rust). It is summarized here. He finds that ethics books are 1.5 times more likely to be missing from major academic libraries than other philosophy books; that ethicists do not vote any more frequently than other philosophers or than other academics; that ethicists listening to conference presentations are no less likely than other philosophers to talk audibly during the presentation, or to slam the door when leaving before the presentation is over, or to leave behind cups and other trash in the conference room, or to avoid paying conference registration fees; that ethicists are no more likely to reply to undergraduates’ emails than other philosophy or non-philosophy professors; that ethicists are no more likely to phone their mothers; that ethicists are no more likely to check the “organ donor” box on their driver’s license; and so forth and so on. Haidt’s point in bringing this up, of course, is that if morals were determined by reason, then moral philosophers ought to behave a good deal more morally than other people. But they don’t.
And for fitting musical accompaniment while contemplating these behavioral facts about moral philosophers, you can listen to Nomy Arpaly’s rendition of “It Ain’t Necessarily So.”