In chapters 3 & 4, Haidt elaborates his basic dual process model of the mind, which he represents metaphorically as a (rational, conscious, deliberative) rider on an (intuitive, unconscious, automatized) elephant. This sort of dual process theory is in a fair way to becoming orthodoxy in contemporary psychology. (Though it’s not there yet. See this symposium in Perspectives on Psychological Science, kicked off by this target article by Keith Stanovich and Jonathan St. B. T. Evans. The best single account of the dual process theory that I know of is Daniel Kahneman’s Thinking, Fast and Slow.) In Haidt’s version, emotions are emphasized in the elephant, and the rider is treated as subordinate and even subservient to the elephant. Thus, his view has more than a whiff of Platonic dualism about it, with the twist that the Platonic charioteer can’t control his team of horses. At best, the charioteer urges and remonstrates with the team. For the most part, the charioteer’s role is to persuade others that the team is going the right way, whatever the appearances may be.
This adversarial view of the relationship between elephant and rider doesn’t sit particularly well with me, much less the treatment of reason as mere post hoc rationalization. The latter, unfortunately, is in certain ways an up-and-coming view. For example, here is a paper by Masicampo and Baumeister that argues that conscious thought does not directly control behavior at all! Rather, conscious thought is for communication, which is for social and cultural life. To be fair, neither Haidt nor Masicampo and Baumeister say that reason never serves any functions but social or rationalizing ones. They acknowledge the role of reason in achieving an understanding of the physical world, for example, and in planning action. And in fact, this role is Masicampo and Baumeister’s emphasis. But Haidt’s focus is otherwise. He is interested in the role of reason in morally relevant judgment and behavior, and in this realm he is not a cognitivist. At least, not as a practical matter. Whatever the ultimate status of morality might turn out to be with respect to truth, as a matter of daily life morality for Haidt is a sociocultural phenomenon that exists because it performs certain sociocultural functions. It is no more true or false than marriage is true or false. Of course, we treat moral judgments as though they could be true or false. Probably we have to do so, or they couldn’t perform their function. But this is an illusion. Thus, moral reasoning is rationalization (in daily practice anyway), because there is nothing else for it to be. Typically, what passes for moral reasoning in everyday life is a tissue of fallacies, but even when it applies appropriately to legitimate moral principles, such reasoning does not appeal to moral truth (unless by happenstance), because morality did not socially evolve by discovering truth. (Rather, it evolved essentially by facilitating social cooperation.)
The propositions that pass for legitimate moral principles are simply rationalizations that have been successful enough in the past to have achieved widespread, habitual acceptability.
As I said in an earlier comment, I don’t take any of this to commit Haidt to noncognitivism in an ultimate sense. Just because moral judgment, norms, and behavior arose through processes of genetic and cultural evolution doesn’t mean there’s no such thing as objectively better or worse ways of living or that there is no truth about right and wrong that we can discover through some combination of evidence and rational argument. To suppose otherwise would be like thinking that because we have been evolutionarily programmed to prefer certain foods to others, to drink when we’re thirsty, to find certain substances disgusting, to find certain stimuli pleasurable and others painful, and so forth, there is no truth about health. Of course, for all I know, Haidt might be a thoroughgoing noncognitivist. It’s just that, from what I’ve read so far, I see no reason for him to be committed to it (and I’ve just given what seems to me a pretty good reason to avoid such a commitment). I suspect that, being a psychologist and not a philosopher, he may not be much interested in the question either way.
In the comment I just mentioned, I emphasized the cultural evolution of moral judgment and behavior, rather than genetic evolution, because I have come to think it is the more important process. Haidt himself has so far emphasized genetic evolution much more than cultural. Again, I don’t see a conflict here, and I suspect that Haidt will start talking about cultural evolution more as the book progresses.
Finally, on Haidt’s tendency to portray the relationship between elephant and rider as adversarial, I think he reasons that the elephant is the product of many hundreds of millions of years of evolution, whereas the rider can be no more than 5 million years old and is probably much younger than that (see TRM, 53–54). No other animal has a rider with anything like the cognitive power of ours, much less a linguistically endowed rider, and yet other animals function perfectly well. If so, then the elephant must be capable of functioning perfectly well without (much of) a rider. Thus, the elephant is the true agent, the rider its servant, and if the rider should think to object to something the elephant does, tough.
I think this characterization is too extreme, in two ways. First, as I indicated, it’s not that other animals have no rider at all. If that were true, they would be unconscious, like one of David Chalmers’s zombies. But I take it that that is absurd. Equally absurd would be the allegation that the whole difference between controlled and automatic processing popped into existence for the first time with the genus Homo some 2.5 million years ago. No, riders have been around in partnership with their elephants for a long time, thinking, recalling, problem solving, and controlling behavior. Language is new, yes, but that isn’t all there is to the rider. Second, the elephant is in certain ways programmable by the rider. This is the whole point of cognitive–behavioral therapy. It is true, of course, that the elephant has been programmed by millions of years of biological evolution to have certain innate reactions to certain stimuli—disgust at some things, lust for others, and so forth. It is also true, and at least as important, that the associative learning mechanism is always running at every waking moment, forging connections and prompting thoughts we can’t help having. Nevertheless, many of the ideas laid down in long-term memory are supplied by the creative, reasoning, imaginative rider. (This is a central point of Masicampo and Baumeister’s article.) Moreover, the contents of long-term memory can be changed as the result of the rider’s conclusions, and these changes can result in new intuitions in the elephant. Haidt acknowledges this, of course, but he doesn’t emphasize it as much as he should, in my opinion. Haidt’s cure for bad elephant behavior is not to retrain the elephant (through cognitive–behavioral therapy or otherwise) but to change the external, institutional environment (TRM, 106). For example, ask people to sign their expense reports at the beginning, promising to be honest, rather than at the end, claiming to have been honest.
In this respect, Haidt’s view is akin to the “libertarian paternalism” of Cass Sunstein and Richard Thaler.
So much for general commentary. I have permitted myself to do this at the beginning of this discussion, rather than waiting until the end, because chapters 3 and 4 add nothing fundamental to the framework presented in chapters 1 and 2, which we have already discussed and understand pretty well, I think. The task of chapters 3 and 4 is to provide evidence from experimental psychology in support of the framework. In what remains of these comments, I shall describe and comment on some of this work, choosing the items that seem most interesting.
Chapter 3 concentrates on the primacy of the elephant; i.e., on the principle that intuitions come first, so that most of our vaunted moral reasoning is so much post hoc rationalization coming along behind, like the sweeper guy at the end of the parade in the credits of “Fractured Fairy Tales” cartoons. I will comment on four of the results Haidt cites as supporting his claims about the elephant. As we will see, I don’t find all of them equally persuasive.
First, he describes the claim of Wilhelm Wundt (the semi-legendary German founder of experimental psychology) that all sense-perception includes an affective element with a positive or negative valence, so that all stimuli all the time are actively evaluated with respect to the basic biological questions: approve or disapprove, like or dislike, approach or avoid. Haidt doesn’t really describe any actual research, but he claims that Wundt’s thesis was revived and somehow validated in the 1980s by Robert Zajonc (a big-shot psychologist). Despite my tone, I don’t really wish to question this, as it makes eminent sense to me. This is just what the sort of organisms that survive to pass their genes on to the next generation should be expected to do. What it means, and what makes it interesting, is that the elephant is constantly evaluating. Affective responses aren’t limited to obviously emotional situations, like being confronted by a robber or attending a funeral or your child’s wedding. We affectively approve or disapprove of all stimuli.
Second, Haidt cites psychopathy as evidence that emotions are necessary for morality. Psychopaths lack moral emotions—they lack the capacity to empathize with or feel sorry for other people or to feel guilty or embarrassed about their misdeeds—and they also lack morals. Unfortunately, he doesn’t give the kind of quantitative evidence that would be needed to show a strong correlation. What about the psychopaths who do not go on crime sprees or commit other offenses? Or does this not happen? If it doesn’t, how would we know? I think it’s in the nature of the case that the evidence here is vague and imprecise, based on clinical impressions, and plays on people’s horror of the phenomenon of psychopathy. Since the possibility of moral psychopaths would cut strongly against Haidt’s thesis and cannot be ruled out a priori without begging the question, the evidence from psychopaths for his thesis is weak.
Third, Haidt cites research by Kiley Hamlin and colleagues demonstrating that infants as young as 4.5 months recognize helpful and harmful behavior in others toward third parties and prefer helpful agents to harmful. She published a review of her work recently here. Her basic paradigm is to show infants morality plays by means of puppet shows. For instance, one puppet tries repeatedly but unsuccessfully to climb a hill. There is a second, helpful puppet that pushes the first to the top. There is also a third, hindering puppet that knocks the first to the bottom. Infants, by reaching and other signs, show a remarkably strong preference for the helper puppet and dislike of the hinderer. An important aspect of the infants’ performance in these experiments is that it depends on recognizing intentionality in the puppets’ actions. Through various manipulations, Hamlin shows that merely helping the first puppet to succeed is not sufficient to produce the effect. If the puppets don’t understand what they are doing, the effect disappears, even if the first puppet is in fact helped by the second and hindered by the third. This sort of evidence implies that the perception of intentionality in others is probably innate in humans, as is a preference for third party helping and dislike of third party hindering, even in observers who have no selfish interest in the outcome.
Fourth, Haidt mentions a “now famous study” published in Science by Josh Greene and some colleagues while Greene was still only a grad student (in philosophy) at Princeton. I think it’s right to say that the paper is famous—I’ve read of it elsewhere, and it has nearly 3000 citations—but it’s hard to see why. Greene’s procedure was to present trolley-problem-type scenarios to participants while scanning their brain activity in an fMRI machine. There were three types: (a) scenarios like the trolley problem, where the suggested action is relatively impersonal (pull a switch); (b) scenarios like the footbridge variation on the trolley problem, where the suggested action is relatively personal and emotional (push a fat stranger in front of the oncoming trolley); (c) nonmoral control scenarios (decide which of two coupons to use at a store). They found that brain areas associated with emotional processing were significantly more active in the second condition (moral-personal) than in the other two (moral-impersonal and nonmoral). Also, areas associated with working memory, which have been shown to be less active during emotional processing, were indeed less active in the second condition and more active in the other two. Haidt says that strength of emotion predicted moral judgment in this study, implying that people who had stronger emotional reactions to the moral-personal scenarios were more likely to disapprove the suggested action (TRM, 77). However, that is not what the study report says. Rather, participants in general engaged in more emotional processing when considering the moral-personal scenarios, regardless of their decisions. (The other two conditions, moral-impersonal and nonmoral, showed similar patterns of brain activity to each other.) Where the decision did make a difference was in reaction time. 
Participants who approved the suggested action in the moral-personal scenarios (throw the fat guy under the trolley) took nearly two seconds longer on average to decide this than participants who disapproved. There were no statistically significant differences in reaction times for different decisions in the other two conditions. The authors interpret this to mean that participants who approved the action in the moral-personal case overcame their emotional abhorrence of the suggested action in order to answer in accordance with logical utilitarian principle, and this took extra time. (An annoying aspect of this study is the authors’ evident assumption that throwing the switch—and therefore also pushing the fat guy—is simply the right answer to the trolley problem, on the grounds that “nearly everyone manages to conclude” (p. 2106) this in the unemotional, impersonal case, where logic and common sense are apparently free to prevail.) Haidt (and Greene also) takes the results to show that philosophers who wouldn’t throw the switch in the trolley problem are answering with their emotional elephant, and all their high-minded talk of rights or other deontological principles is just so much self-delusional rationalization of their feelings. But I ask you, setting all theory aside, is there anything the least bit remarkable or even interesting about these results? News flash: people who are invited to commit grisly murder in a good cause feel stronger emotions than those who aren’t and take longer to decide to do it than to decide not to. Amazing as this news is, its theoretical implications for moral psychology approach zero.
Chapter 4 concentrates on the rider. Its theme is that Plato’s charioteer-should-drive-the-chariot psychology is wrong: “reason is not fit to rule; it was designed to seek justification, not truth” (TRM, 86). Again, Haidt presents a series of lines of evidence in support of his thesis, and again I will describe four that seem particularly interesting or noteworthy.
First, he describes research intended to show how sensitive we are to others’ bad opinion of us. Participants sat alone in a room describing themselves into a microphone for five minutes. On a screen in front of them, numbers would flash as they spoke. They were told that the numbers represented the current rating (ranging from a high of 7 to a low of 1) by a second participant, who was listening, of how much the second participant would like to interact with them in the next phase of the study. In reality, of course, the ratings were faked by the experimenter. Imagine that you are in this study. As you talk, the numbers are going 6…5…4…3… I think we don’t need a statistical analysis to know that this would feel terrible. And that is the point of the study. Talk of not caring what other people think of us is bluff. The truth is that we care very much whether other people like and approve of us. Social disapproval is a very powerful stick. I doubt anyone—except maybe psychopaths—is immune from this.
Second, we care what we think of us, too. Research shows that people who are left unobserved and therefore free to lie and cheat do lie and cheat, but only up to a point. In the study Haidt is referring to, participants answered fifty multiple-choice questions, like “What is the world’s longest river?”, marking their answers on a test form, which they then transferred to a Scantron form and handed in to the experimenter. The experimenter scanned the form and handed each participant ten cents for every correct answer. That was the basic, control condition. There were three other, experimental conditions. In the first, the correct answers were shown in gray on the Scantron form, so the participants knew the correct answers when they transferred their own answers to the form. In the second, not only were the correct answers shown on the Scantron form, but participants were instructed to shred their test forms before handing in the Scantron form to the experimenter. In the third, the correct answers were shown, the test form was shredded, the Scantron form was shredded, and the participant himself took however much money he wanted (knowing it was supposed to be ten cents per correct answer)! Now, you might expect a little cheating to go on between the control condition and the first experimental condition, and that’s just what happened. Showing the correct answers on the Scantron form magically improved performance from an average of 32.6 correct answers out of 50 to an average of 36.2. But if you are expecting the cheating to become more egregious as the opportunity grows, you will be disappointed. The results for the two remaining conditions were 35.9 and 36.1, no different from the first experimental condition. Moreover, these averages are not the result of a few bad apples cheating outrageously. Rather, they are the result of most people cheating, but cheating only a little. The implication is that people are no more dishonest than they can justify to themselves.
To cheat, you have to be able to kid yourself that you aren’t “really” cheating. You have to be able to say something like, “Oh, I really knew that one,” when changing an incorrect answer on the Scantron form. The moral Haidt draws from this experiment is that we are very good at telling self-serving lies, and this is just what we do when we have the opportunity. The rider acts, within the limits of its ability, to give the elephant what it wants.
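For what it’s worth, the reported averages lend themselves to a quick back-of-the-envelope calculation. The figures are as given above; the condition labels are my own shorthand, not the study’s:

```python
# Mean "correct" answers (out of 50) from the cheating study described
# above. Condition labels are my own shorthand for the three
# experimental conditions; figures are as reported in the text.
baseline = 32.6  # control: no opportunity to cheat
conditions = {
    "answers visible": 36.2,
    "answers visible, test form shredded": 35.9,
    "answers visible, everything shredded, self-paid": 36.1,
}

for label, mean in conditions.items():
    inflation = mean - baseline  # extra "correct" answers claimed
    # At ten cents per answer, the extra payout per participant:
    print(f"{label}: +{inflation:.1f} answers (~${inflation * 0.10:.2f})")
```

However much the opportunity to cheat grows across the three conditions, the inflation stays at roughly three and a half answers, or about 35 cents: most people cheat, but only a little.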
Third, we are not very good, in most circumstances, at investigating and forming judgments concerning matters of fact. Even in matters about which we have no vested interest or personal stake, we tend to settle quickly on a hypothesis and seek to justify it to the exclusion of alternatives. Moreover, we tend to have easily satisfied standards of “proof.” Usually a single piece of evidence will do. It is very much as if, having quickly decided that H must be true, we cast around for a reason that supports it. If we find one—and it is rare that there isn’t something to be said for a given hypothesis—we stop thinking! We have a justification and are entitled to believe. The same goes for denial. Having decided that H must be false, we look around for a reason that undercuts it—and again, it will be seldom that we are unable to find one. Having found it, we can stop thinking and generally do. The educational psychologist David Perkins calls this, amusingly, the “makes-sense epistemology.” I leave it to the reader to judge whether this description does not come uncomfortably close to his own thought processes too much of the time. In one demonstration of this basic point, Perkins asked participants to make an initial judgment concerning some fairly tame social issue, such as whether giving schools more money would improve teaching and learning. Participants were then asked to write down all the reasons they could come up with on either side of the issue. Reasons were scored as “my-side” or “other-side” depending on whether they supported or opposed a participant’s initial judgment. Participants generated far more my-side arguments than other-side. Also, importantly, although IQ was by far the best predictor of people’s ability to generate relevant arguments, it predicted only the number of my-side arguments. Smarter people are no more likely to be fair-minded or thorough investigators of a question than the less smart. 
They are more effective advocates (for their elephants), not more rational thinkers. (For much more on the lack of correlation between intelligence and rationality, see the work of Keith Stanovich, for instance this book and this book.)
Fourth, Haidt points out that many of the “flaws” and “biases” in human cognition start to make sense if human cognition is reinterpreted as an advocate for the elephant instead of as a seeker of truth. Consider confirmation bias, our capacity for producing believable lies in the service of what we want, the makes-sense epistemology—these are just how reason ought to perform if its role is to act as a lawyer for its elephant. Researchers have traditionally looked at these phenomena as failures of reason. But if the evolutionary function of reason is to be a lawyer for the elephant, not a scientist in search of truth, then these phenomena are not failures! They aren’t bugs, they’re features! In Haidt’s view, this is the truth about reason. Therefore, we cannot expect people as individuals to ever be very good reasoners. For reason to produce truth, it needs the discipline of civil, collegial opposition from other reasoners. Successful reasoning is largely a social phenomenon. This is also why ideological diversity in academia is so important, and why the overwhelmingly left-wing composition of social science and humanities departments is such a bad thing.
Finally, one last point. (I mean, I’m way over the word limit anyway, so what the hell.) Haidt mentions philosopher Eric Schwitzgebel’s amusing research program in which he empirically investigates the moral behavior of moral philosophers (often with collaborator Joshua Rust). It is summarized here. He finds that ethics books are 1.5 times more likely to be missing from major academic libraries than other philosophy books; that ethicists do not vote any more frequently than other philosophers or than other academics; that ethicists listening to conference presentations are no less likely than other philosophers to talk audibly during the presentation, or to slam the door when leaving before the presentation is over, or to leave behind cups and other trash in the conference room, or to avoid paying conference registration fees; that ethicists are no more likely to reply to undergraduates’ emails than other philosophy or non-philosophy professors; that ethicists are no more likely to phone their mothers; that ethicists are no more likely to check the “organ donor” box on their driver’s license; and so forth and so on. Haidt’s point in bringing this up, of course, is that if morals were determined by reason, then moral philosophers ought to behave a good deal more morally than other people. But they don’t.
And for fitting musical accompaniment while contemplating these behavioral facts about moral philosophers, you can listen to Nomy Arpaly’s rendition of “It Ain’t Necessarily So.”
I didn’t need to hear that. Yet I’m strangely glad that I did!
David, I want to give you some feedback on the first part of your summary/commentary, up to the ‘so much for general commentary’ part. We interpret/frame things in some different ways, and in some similar ways, and bringing these differences and similarities out might be helpful.
1. THE DUAL PROCESS MODEL. I believe that Haidt mentioned this model, but I’m not sure he explicitly said that he subscribed to it. I could not find ‘dual process model’ in the index. And I’m not entirely sure what the dual process model is (model of what – cognition?). I took Haidt to hold that: (a) it is the cognitive, not the affective or motivational, features of moral judgment (and the processes of coming to have and express moral judgment) that are important, (b) some cognitive processes of moral-judgment generation (and perhaps belief-generation) are thoroughgoingly automatic while some are the product of the (at least partially) agential process of reasoning, canvassing reasons for and against, giving reasons, etc., and (c) we are functionally put together in such a way that the automatic (specifically, intuitive and emotion-driven) processes of moral-judgment generation generate most of our moral judgments (their job and our strong tendency), while our agential powers of reasoning – or at least a certain dominant such power – have the job of (and hence we strongly tend to) producing rationalizations for moral judgments arrived at via automatic processes. Maybe this counts as a dual process model of moral cognition – I don’t know.
2. THE RELEVANT MORAL REASONING POWER. I think it is important that Haidt fails to distinguish different reasoning processes (that might produce moral judgment). What he stresses is that moral reasoning – or, more precisely, public moral reason-giving – is a process of “justifying ourselves to others” (and also “justifying ourselves to ourselves,” as when we “internalize” this public process). We might distinguish justifying something (a belief, attitude, behavior) and justifying something *to* someone (even if this someone is oneself). The second thing, the thing Haidt is focused on, is fundamentally social and functions to achieve certain patterns of social interaction – roughly, those that do not involve gratuitous harms to others or unfairness to others. Thus, if I step on your gouty toe accidentally, I explain or justify myself: it was an accident, I was stepping with proper care and hence not being negligent, etc. Haidt gets at this kind of thing in saying that, when we make a moral judgment of someone, we are essentially calling on the community to shun and punish them. So, he says, there is a particular sort of reason to give them reasons. None of this constitutes an adequate account of justifying oneself to others, but it gestures in the right direction. The first thing, though, simply justifying one’s attitudes, actions, or beliefs, is not fundamentally a public process. It is simply attempting to apply the good and appropriate rules of cognitive reasoning and decision-making to try to get true beliefs and make the best decisions with the information available. I don’t think Haidt would necessarily deny any of this, but he gloms everything together under the heading “moral reasoning,” and this obscures the fact that there are different broadly reason-giving (reason-canvassing, etc.) processes at work in individuals.
Though Haidt does not seem to agree with this, it is open for him – for all I have said so far about his fundamental view – to say that this other simply-justifying or simply-bringing-evidence-to-bear-explicitly type of moral reasoning functions precisely to correct errors (and can perhaps do so admirably well in the right circumstances). (This last point very much speaks to the possibility of training the elephant, as well as to the possibility of the rider/charioteer “overriding” the emotional, intuitive verdicts of the elephant in particular cases. I agree that Haidt gives unduly short shrift to both of these things. And to the agential reasoning, figuring-out, and willing powers of animals. To be fair, Haidt might plead that he is simply not concerned with these things and just means to impress us with how powerful the elephant is – and how at least one type of moral reasoning seems to have the job of rationalizing our moral beliefs and actions to others and mainly just does this.)
3. TO WHAT END JUSTIFYING OURSELVES TO OTHERS? Even if we tidy up Haidt a bit and focus solely on this particular power of moral reasoning, Haidt’s characterization of its “job” as that of “rationalizing” what we already believe morally seems a bit facile. Even if we don’t have a good, full functional account of what justifying ourselves to others is, there is a pattern to the reasons we give others – again, roughly, we are telling them why it is okay that we are doing something they might not like, or calling them out on their behavior and thus inviting shunning and punishment from the community. Even if part of what we are doing is a kind of broadly self-interested, strategic rationalizing to preserve moral reputation and thus stay on the good side of the moral community, we are also giving the particular sorts of reasons indicated, and these reasons are taken to have a certain kind of normative force. Formally, they are the appropriate reasons or the right kinds of reasons. We might cash this out in terms of justifying oneself to others aiming as well to achieve broad types of dyadic and individual-community relationships that constitute the moral form of cooperation. If this is right, then part of the end (or one of the ends) of this sort of public reason-giving is achieving a kind of moral order. The end here is not truth (the end of purely cognitive reasoning), nor is it making good decisions or weighing different values (the ends of practical reasoning broadly construed), but the structure is analogous. This end can be achieved or promoted merely via others’ believing that one has given adequate reason (justified oneself morally) – and this might explain, in part, why this sort of reason-giving so often comes to be dominated by strategic moral reputation management – but the story here is much more interesting and complicated than Haidt’s picture suggests.
For all this, he could be right that, in a great many social circumstances, this type of reason-giving is pretty much keyed to maintaining moral reputation, not to doing so via actually being morally good. I suspect that Haidt is at least 75% right here, but again there is a competing functional element that he gives short shrift.
4. HOLDING EACH OTHER TO ACCOUNT AND EVALUATING HOW WE DO SO. I think Haidt is on target in his thesis that the main thing that allows human beings to be the “super-cooperators” of the animal kingdom – in that we form large and intensive cooperative social structures even among genetically unrelated individuals – is our being programmed to “hold each other to account” (by shunning, punishing, and disapproving of those who act this way rather than that). Haidt holds that justifying ourselves to each other is our distinctive manner of knowing how to “get around” in such a social system. He also holds that we are “soft-wired” to do the things that result in such a public system of holding each other to account – shunning and punishing rule-breakers, for example. This suggests a thesis that is close to my heart: the idea that morality is fundamentally about better and worse ways of holding each other to account (relative to both dyadic and individual-community relational ends and ends that have a distributed utility for all, or all involved in the relevant cooperative endeavor). It is not fundamentally about making better or worse or the best decisions in specific sorts of choice contexts. This is somewhat speculative, but Haidt may be adopting a similar view in endorsing welfare utilitarianism as the right normative moral theory (and in interpreting Greene’s empirical philosophy intuition experiments with brain scans and trolley problems). However, you only get a result like this if the appropriate rules to which we hold each other to account are determined solely by their extrinsic utility. But Haidt is on no firmer ground than Hume is in taking such a position.
On the dual process theory, see page 53, where Haidt explicitly refers to “controlled” versus “automatic” processes (the usual names for the two processes in the dual process theory) and references Kahneman’s book and the “two-system perspective” (another name for the dual process theory) in an endnote. Another good source for the dual process theory is Keith Stanovich’s What Intelligence Tests Miss, an excellent book in its own right.
Haidt doesn’t emphasize the dual process theory—appropriately, since that isn’t what his book is about—but it forms the background to his theorizing. Your summary of what you take Haidt to be saying, especially points (b) and (c), seems pretty accurate to me, except that it’s not part of the dual process theory that reason is mere rationalization. Reason can take the lead in figuring things out, and in fact it has to on those occasions when the elephant has nothing to say. (For example, which of these two Lotto tickets has a better chance of actually being drawn: 01-02-03-04-05-06 or 23-11-17-03-01-30? The elephant instantly tells you the first ticket could never win—it would be too much of a coincidence—so the second is the better bet. (Whereas in reality of course the two tickets are equiprobable.) But on the other hand, what about 34×67? The elephant tells you nothing. For problems like this and lots of others, it’s reason or nothing.) Furthermore, it’s not like reason per se is invalid or anything. However, reason is hard and we tend to avoid it. We are enabled to do so by the elephant, which can make judgments and decisions on its own much of the time and often does a pretty good job. Moreover, when we do reason, we are often pretty lazy about it. But this still doesn’t necessarily make it rationalization. Of course, when it comes to moral reasoning and the justification of one’s behavior, then we do in fact seem to do an awful lot of rationalizing. And this is Haidt’s focus, obviously.
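The Lotto point is easy to check directly. Here is a minimal sketch in Python, assuming a standard pick-6-of-49 format (the pool size is my assumption for illustration; it isn't specified in the example):

```python
import math

# Any *specific* six-number combination is exactly one of the C(49, 6)
# equally likely draws, so the conspicuous ticket and the scrambled one
# have identical odds. (The 49-number pool is an assumption for illustration.)
total = math.comb(49, 6)

p_conspicuous = 1 / total  # 01-02-03-04-05-06
p_scrambled = 1 / total    # 23-11-17-03-01-30

print(total)                         # 13983816 possible draws
print(p_conspicuous == p_scrambled)  # True
```

The elephant's hunch that an orderly ticket "could never win" confuses the improbability of *any* particular draw with a special improbability of conspicuous ones.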
If we try to boil Haidt’s view (so far) down to a few key threads, I would suggest four:
(1) The evolved status of moral norms. Existing moral norms are the product of (genetic and cultural) evolution, and as such they are not the result of cognitive breakthroughs like the discovery of Newtonian mechanics. Of course, they evolved for a reason and so deserve respect, at least initially. On the other hand, not everything that has evolved is good.
(2) Reason is a press agent (or lawyer) for the elephant. Haidt seems to think that the major driver for the genetic evolution of reason was not so we could invent the wheel and cooking and the bow and arrow, but so that we could argue and persuade and socially coordinate with each other (p. 54). Thus, it is not a foible of reason that we spend so much time rationalizing and justifying ourselves; rather, that’s the main evolutionary function of reason. As I said in my post, from an evolutionary point of view, rationalization is not a bug, it’s a feature.
(3) Reason is lazy. This is just the point from dual process theory. Reason is hard, so we avoid doing it most of the time, even when we would clearly benefit from the exertion. This is what’s going on with the makes-sense epistemology and related points described in sections 2 (pp. 91–95) and 4 (pp. 97–99) of chapter 4. Also in his summary of Tetlock’s findings on pp. 88–89. Haidt doesn’t always separate laziness (i.e., general avoidance of controlled processing) from motivated cognition (rationalization), but they aren’t the same and should be kept separate.
(4) Reason is still reason. For all the slams against the rider given above, we shouldn’t forget (and Haidt doesn’t forget) that reason is still valid and the source of much that is good in our lives.
Although Haidt acknowledges point (4), you and I seem to be in agreement that he is overly pessimistic about it. Rather than look at these four points and conclude that we ought to make reason a more important force in our lives (as Stanovich does in What Intelligence Tests Miss, for example), Haidt seems ready to give up on reason and instead to look for environmental tricks to get us to perform better without reason (à la libertarian paternalism).
For myself, I’m still trying to figure out how much of Haidt’s view is right, and to what degree. Much of it seems on the right track, but my views on these points are still unsettled.
On holding each other to account, I don’t have anything dazzling to say about it, but I think you are exactly right that this is Haidt’s view. I expect we’ll hear more about this in subsequent parts of the book.
– I’ll have to read up more on the dual process theory of cognition, but I think we are pretty much on the same page about Haidt’s view. (One thing that gives me pause is that “controlled” processes, like explicit deliberation, are partly composed of sub-processes that are automatic. So the distinction has to be a distinction at a particular – perhaps functionally individuated – mereological level of cognitive process. At this level, some processes are purely automatic – even if they are “programmed” by processes that are not – and some essentially involve agency.)
– Another terminology issue. I’m not familiar with the term ‘makes-sense epistemology’. Is this roughly the (descriptive) epistemology of confirmation bias?
On the dual process theory, yeah, to understand it will require more than reading my own brief and casual remarks. This makes me think of when I first began the serious study of psychology, way back when. I spent three or four weeks tearing my hair out over the concept of short term memory, trying to figure out just exactly what it meant. One trouble is that, as with concepts in philosophy, different writers have different ideas about them. Another is that theory in psychology is not as highly developed and clear as in sciences like physics—far from it—so theoretical concepts are accordingly much more vague and inchoate. Anyway, any of the sources I’ve supplied can get you started on the right track.
On the makes-sense epistemology, yeah that’s what it is. I describe it in my post (third item I discuss for ch. 4). This corresponds to Haidt’s discussions on pp. 94 and 98. Haidt doesn’t use the term “makes-sense epistemology” in the book, but he does in his 2001 Psych. Review paper. It is not a standard term in psychology or anything, just a term of David Perkins’s. I like it because it is hilarious and hits the nail right on the head.
Here is a puzzle for Haidt’s view. I am sure it is too simple and that his theory has some resources to address it, but I’m curious what you two think the appropriate response would be.
Haidt’s view is that the basic function of reason is to justify our emotional intuitions to others. Most of us also feel the need, to some extent at least some of the time, to justify our emotional intuitions to ourselves, but even that need is tied up with our need to justify ourselves to others. But — and here the puzzle begins — it is only in virtue of being rational that others are such as to demand justification from us and that we are such as to feel the need to offer it. For while we might, as more purely emotional animals, desire the approval and co-operation of others and therefore adopt various means to secure it, the forms of approval we want and the means we adopt to secure it — viz. justification — presuppose the rationality of those others and ourselves. If we were not rational, we could neither give and receive nor even want the forms of approval and co-operation that we seek via justification. The puzzle, then, is that the ends the service of which Haidt takes to be reason’s raison d’etre appear to be intrinsically rational ends. But if that is so, then it simply cannot be true that reason is basically an instrument that functions to serve those ends, because those ends themselves are already intrinsically rational. Reason could be given a strictly instrumental role only if the ends to which it is supposedly an instrument were not intrinsically rational; but no satisfactory account of the ends as Haidt understands them could fail to acknowledge their intrinsically rational character — not because Haidt takes them to be rational, but because given what he takes them to be, they are. If they were not intrinsically rational, then it would be mysterious why we want rational justifications from each other in the first place and just why it is that rational justifications satisfy us when they do. So Haidt’s theory either faces an explanatory gap or smuggles reason into the explanans when it is supposed to be the explanandum.
Of course a lot rides on how we understand ‘rational.’ We should avoid the simple mistake of conflating the rational and the cognitive. But if reason is to be distinguished from non-rational varieties of cognition at least in part by the involvement of abstract concepts and inference, then it seems difficult to avoid the conclusion that the sorts of emotional approval and disapproval that Haidt recognizes as important to human beings are at least often rational. Haidt explicitly regards emotions as at least very often cognitive, and while cognition does not entail rationality, it seems clear enough that many of our emotions depend on abstract concepts and inference. In some cases we might be able to have less complex varieties of the same emotions even without reason — fear, say, and perhaps disgust — but in other cases that seems more than implausible — shame, pride, pity, resentment, and respect come to mind. Even if we concede a great deal and grant that no type of emotion whatsoever is essentially rational in this sense, it seems plain that few of them would have the character that they do in human life if stripped down to their bare non-rational manifestations. (I’m not trying to make a case for this here that would convince anyone who doubts it; those who do might have a go at Martha Nussbaum’s Upheavals of Thought, remembering that her own view is breathtakingly strong in the degree to which she sees emotions as expressions of reason, so that one needn’t accept anything nearly so ambitious in order to get on board with the view I’m taking here). Granted, of course, that reason developed evolutionarily out of non-rational forms of emotion and cognition, it seems nonetheless that it has so infected the emotional and intuitive side of us that a theory like Haidt’s, which seeks to explain reason by appeal to non-rational emotion, is just a non-starter.
Presumably Haidt or a defender of Haidt will want to take a different view of what reason is and what distinguishes it from non-reason. But the puzzle can’t be resolved simply by using the word ‘reason’ differently — to pick out, say, conscious, deliberative thinking as distinct from unconscious, intuitive, automated cognitive processes — because then we can just reformulate the puzzle with new terminology. Human emotion and ‘intuition’ seem thoroughly imbued with abstract conceptual thought and inference in a way that all or most non-human animal psychology is not. Conceptual thought, inference, and reason-giving are tightly bound together no matter how we choose to use the words ‘reason’ and ‘rational.’ Were it not so, it would be mysterious why we want reasons and justifications in the first place, or why rational justification gains any purchase on the emotional, intuitive dimension of ourselves.
As I said, I’m sure Haidt’s view has plenty of resources to address this puzzle. But it does strike me as a puzzle for his view, and I wonder what either or both of you think about it.
Yes, certainly, rational persuasion presupposes rationality on the part of the person to be persuaded. Therefore, rational persuasion cannot be the essential function of reason. If methods of proof did not already exist, there would be no means of rational persuasion. Saying rational persuasion is the essence of reason is like saying that communication is the essential function of a representational system. That can’t be true, because if content weren’t already represented, there would be nothing to communicate.
However, Haidt’s answer to this is easy. He doesn’t think rational persuasion is the essential function of reason. Reason for Haidt is still basically a cognitive faculty, good for understanding the world, predicting the future, guiding decision making, developing new technologies, and so forth. See for example TRM, 54. Haidt doesn’t question the validity of reason or its power to demonstrate facts and make decisions. So rational standards of persuasion pre-exist persuasion, and there is no logical difficulty.
All this raises some interesting questions, though. For one, given that reason is a system of logical assessment of evidence and argument, why should reason exist? I mean, from Haidt’s sort of biological/evolutionary perspective, traits don’t just pop into existence; to evolve, they need to be good for something. So, what exactly was the use of syllogisms to hunter-gatherers on the African plains 100,000 years ago? It may be that since the Industrial Revolution, reason has played an important role in successful living. Nowadays, nerds who were marginalized in high school end up as elites in Silicon Valley and Wall Street and Washington, DC. But in previous millennia, there were no computers to program or statistical analyses to compute. So, why do we even have reason?
It strikes me that reason is an inevitable concomitant of conceptual development in a representational system that is capable of expressing propositions. Logical relations are just formalized or generalized conceptual relations. This is particularly easy to see in the case of Aristotelian syllogisms. If some of the creatures in this stream are trout and all trout are tasty, then some of the creatures in this stream are tasty. If all members of the tribe in the next valley are treacherous and this guy is a member of that tribe, then this guy is treacherous. The content of the concepts prescribes the relations. Therefore, if one understands the concepts, one has what one needs to trace the relations, and if one can’t trace the relations, then one’s understanding of the concepts is limited. Reason therefore doesn’t have to be useful to exist, only propositional understanding that employs concepts does.
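The point that "the content of the concepts prescribes the relations" can be made precise. A minimal sketch in Lean 4 of the trout syllogism (the predicate letters are illustrative placeholders, not anything from the text):

```lean
-- Darii syllogism: some S are T, and all T are Y, so some S are Y.
-- Instantiate S := "creature in this stream", T := "trout", Y := "tasty".
example {α : Type} (S T Y : α → Prop)
    (h1 : ∃ x, S x ∧ T x)      -- some creatures in this stream are trout
    (h2 : ∀ x, T x → Y x) :    -- all trout are tasty
    ∃ x, S x ∧ Y x :=          -- some creatures in this stream are tasty
  match h1 with
  | ⟨x, hS, hT⟩ => ⟨x, hS, h2 x hT⟩
```

Note that the proof is nothing but unpacking and repacking the concepts: whoever grasps what "some," "all," and the predicates mean already has everything needed to trace the inference.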
A consequence of this idea is that we should understand reason as developing culturally along with our conceptual repertoires. As our concepts multiply and develop, so does reason. For example, take arithmetic. This depends on having concepts for numbers, obviously, so until such concepts are developed, there will be no arithmetic. Number concepts might seem to be an extremely elementary achievement, but I believe there’s good reason to think that before the agricultural revolution of 10,000 years ago or so, most human societies lacked well articulated number concepts, counting systems, and arithmetic. Today among the few remaining hunter-gatherer societies, this is not uncommon. See, for example, this study of the Munduruku people of Amazonia and this study of the Piraha people, also of Amazonia. The Munduruku have words for one, two, three, four, five, and many. The Piraha have only one, two, and many. Their concepts of one, two, etc. are not firm. For instance, there are circumstances where it is considered appropriate to use “one” to refer to two objects—if the contrast is to three, for example. Thus, their number concepts are like our concept of a couple. In English, a couple is most literally two, but it is often used to mean an indefinite small number more than one. These people do not count. They apply their number concepts to objects by gestalt, the way you recognize a square or a triangle. The Munduruku, who have number concepts from one to five, cannot correctly subtract two from three more than about 80% of the time. They do not employ any sort of systematic arithmetic operations. They do not have mathematical reasoning.
I would suggest that all forms of reasoning are thus culture-bound. Reasoning depends on conceptual relations, which depend on the available concepts, which are largely cultural products. Reason evolves as an inevitable by-product of cultural evolution.
We should distinguish reasoning from intelligence. Intelligence is good for much more than logical reasoning. Our brains tripled in size over the last 5 million years of our evolution. It might seem obvious that intelligence is useful, but actually it’s not so clear what intelligence is good for. In particular, since a big brain is extremely costly to operate (your brain alone consumes 20% of your calories), the question is not so much what is intelligence good for as what makes it worth the expense. Furthermore, if intelligence is so great, why hasn’t more of it evolved in other species? We need an explanation for the evolution of our intelligence that explains why the other primates didn’t bother. I have the impression that the leading theory of the evolution of intelligence is that it evolved to facilitate success in social interaction. We are a social species, and our success as a species depends on our ability to function well in groups. This means that our success as individuals also depends on our ability to coordinate with others, persuade others, manipulate others, form successful alliances, impress others, and so forth. Thus, individual success in humans depends heavily on the ability to succeed in Machiavellian social games—and this takes intelligence. This competition is supposed to have led to an evolutionary “arms race” with respect to intelligence, as we all try to outwit each other in the social realm, resulting eventually in our big brains and high intelligence.
This theory is one that might appeal to Haidt—I have no idea about that, but it puts the social function of intelligence front and center. But I’m not a fan of it, for various reasons. I am more persuaded by an alternative view, which says that the key driver of the evolution of human intelligence is the importance of hunting in our ecological niche. The theory is complex, but basically it claims that by 2 million years ago, humans had shifted to a food niche that relied on widely dispersed, high-quality calorie packages in the form of game, especially larger game. We are practically the only predator that regularly hunts creatures our own size (or larger) and that does not depend primarily on lower quality game (the young, the old, the diseased). Not to deny the importance of the “gatherer” part of the hunter-gatherer name, but most of the calories of hunter-gatherers, and the highest quality calories nutritionally speaking, come from hunting. But hunting requires great skill. You might think that young people at the peak of their physical strength and endurance would bring in the most game, but the truth is that hunter-gatherer hunters don’t reach the height of their productivity until their mid-thirties. Young men aren’t very good hunters, and teenagers are worthless. (Actually, children in general in hunter-gatherer societies don’t start to bring in more calories—in any form—than they consume until their very late teens.) Thus, hunting requires skill that takes time to develop, and it is therefore highly reliant on intelligence. There is much more to the story, but that’s the basic idea. A full account and review of supporting evidence is here, if anybody is interested.
One other question that Haidt’s attitude toward reason raises for me is how seriously we ought to take the makes-sense epistemology. Is it an exaggeration, or is it the truth? I lean toward the latter. People can perform very well at reasoning—assuming they have the ability, which a great many people don’t—under the right circumstances. Haidt summarizes these circumstances, drawing on the work of Philip Tetlock, as follows (TRM, 88):
This just seems dead right. When the named conditions don’t obtain—which is most of the time, of course—people’s reasoning is slipshod, lazy, cursory, and fallacy-prone. Of course, we can put ourselves into situations where these conditions do obtain. Professional situations, such as in academia, often do a pretty good job of this. But this is just why we rely so heavily on formal institutions of inquiry to get at the truth.
I think I was / am confused about just what Haidt wants to claim about the role of rationalization and persuasion. From what you and Michael have written, I had the impression that it was supposed to be the function of reason and that as such it was supposed to play an explanatory role in the evolutionary development of reason. That’s the part that strikes me as problematic, but from what you say now it sounds like that isn’t Haidt’s view.
It’s not enough, though, for Haidt to acknowledge that reason is, after all, as you put it “a cognitive faculty, good for understanding the world, predicting the future, guiding decision making, developing new technologies, and so forth,” or to point out that he “doesn’t question the validity of reason or its power to demonstrate facts and make decisions.” He clearly can’t deny that stuff, on pain of making nonsense of his own theorizing. But I wasn’t imagining that he did deny it; after all, to claim that X is the function of Y is not to claim that Y can do nothing other than X. Rather, I was imagining that he was denying that those features of reason enter into a functional / evolutionary explanation of it. That’s what strikes me as problematic. Both of the views you sketch – social interaction more generally and hunting in particular – seem inconsistent with what I was taking Haidt’s view to be, because both of them, and the latter most clearly, appeal to functions more basic and general than rationalization and persuasion, and do not suffer from the kind of circularity I was worried about. But I think I can see how Haidt’s view might not be inconsistent with those. After all, I have myself been keen to insist — I think we’ve talked about this in the past — that etiological accounts of function are not the only acceptable accounts of function and that something can have a biological function even if that function does not explain why that feature evolved. So perhaps consistently with one or another account of the sort you lay out, the primary function of conscious, deliberate reasoning could be rationalization and rational persuasion.
Still, once we see reason as part of a broader package in a way you suggest — I quite agree that “reason is an inevitable concomitant of conceptual development in a representational system that is capable of expressing propositions”; I think I favor a broader conception of reason such that it includes the latter rather than being simply an inevitable concomitant of it, but that seems like a relatively trivial point — the idea that rationalization and rational persuasion have much of an explanatory role to play at all strikes me as implausible, whether we’re looking for an etiological functional explanation or a systemic one. If reasoning just does serve a broader array of purposes than rationalization and rational persuasion, and its serving those purposes is not to be explained as a byproduct of its serving the latter, then the claim that rationalization and rational persuasion are a function of reason begins to seem trivial. In other words, granted that I misunderstood the sort of account Haidt is trying to give, I’m now left wondering what the big deal is supposed to be. If it all boils down to the claim that rationalization and persuasion are prominent among the things we do with reason, then we’re left, at best, with some empirically well confirmed support for a point that thoughtful observers of human beings have noted for centuries and that thinkers as diverse as Aristotle, Hume, Kant, Nietzsche, and Freud would regard as patently obvious. I’m pretty sure that can’t be the right reading of Haidt either, but I’m not sure what is.
Pardon my obtuseness. It would not be inappropriate for you to tell me at any point now to just go read the book for myself, but I have to admit that I still haven’t seen anything in your and Michael’s summaries that convinces me it’s worth the time. Your comments on it, however, are a different story.
I have several things to say in response to all this.
First, I think I have been guilty of ambiguity in some of my use of “reason.” Sometimes, “reason” refers to the tracing of logical relations among evidence, propositions, and arguments. This is the use in such statements of mine as, “reason is still reason,” “reason is still valid,” etc. But most of the time, “reason” refers to the rider; i.e., controlled processing of all kinds, not just reasoning in the first sense. The second sense can be misleading and may have had a role in fomenting some misunderstanding. I don’t know if I’m doing it because Haidt does it or if I’m just doing it on my own. Anyway, it’s nearly always really the rider that Haidt is talking about, even when he says “reason”—as in, “In this chapter I’ll show that reason is not fit to rule; it was designed to seek justification, not truth” (86). Here he is obviously not talking about reason in the logical sense but about the faculty of controlled, explicit reasoning (the rider). I’ll try to be clearer in the future.
Second, to see how Haidt (as I understand him) thinks about the evolution of the rider, imagine that somehow all our social interactions—all through our evolutionary history—were conducted in a kind of courtroom. It’s always you, in the role of lawyer, and the person you are dealing with, also a lawyer, interacting before a jury of onlookers. Of course, sometimes we are the onlookers, but our adaptive success depends mainly on our performance against others in front of the onlooking jury. The jury members are relatively neutral but also not particularly attentive or necessarily the sharpest knives in the drawer. In other words, they have the power of logical reasoning but are also subject to rhetorical sleights of hand. The rider has evolved to succeed under these conditions. Therefore, the rider is not interested in truth as a rule, any more than a lawyer in court is. The rider is not interested in being forthcoming and open and fair minded. The rider is not too scrupulous about points of logic. Indeed, the rider will exploit any fallacies that will get the job done, including blatant appeals to the jurors’ emotions, sympathy, and prejudices. The rider is not interested in probing all avenues, especially when it comes to counterarguments and contrary evidence. Rather, the rider is interested in winning! Just like a lawyer in court. Winning means achieving the goals set by the elephant, the lawyer’s client. I think Haidt thinks that a courtroom is in fact a fair approximation of the evolutionary conditions that have mainly shaped the rider. (He also talks about a press secretary for a politician, but I think maybe the lawyer analogy—which is also his—is a little better.) Therefore, the rider really is much more of a lawyer/advocate than a scientist or beard stroking policy wonk. And this goes for all of us; we can’t help it. It is a delusion to think you are an exception. 
The only serious way to get ourselves to reason in a rational, fair, and thorough way is to put ourselves in situations where there is a high quality jury—interested in accuracy, smart, and well-informed. Formal institutions of inquiry, such as those that prevail in science, try to do this; that is how science achieves trustworthy results, and without such institutions it would not.
Third, I do think this is a distinctive and interesting claim. I don’t think it is the typical way of looking at people. Right now on my neighborhood email discussion list there is a heated debate going on over a proposal by the Oakland City Council to impose a special tax on soda pop. Reading this traffic, I naturally am thinking in terms of the arguments, how clever this one is, how moronic that one is, and so on. I am thinking of the logic and focusing on the reasoning. I think Haidt wants me to stop doing that and think instead of a bunch of elephants with sharp-suited lawyers on their backs. The elephants’ stubborn feelings and intuitions are driving the show, although it’s the lawyers who do all the talking. But the lawyers are all just saying whatever they can think of to win the debate for their respective elephants. The only real hope for elevating the quality of this debate is if it were put before a sophisticated audience (and of course, there’s no chance of that, because it’s an email listserv). As for the individuals, the only chance of their changing their minds about anything is if somebody should happen to say something that strikes their elephant in the right way; i.e., that induces a new or conflicting intuition and thus some soul searching. So, when you see another person, do you see a rational agent, albeit one beset sometimes by various passions? That would be the classical image, which comes very naturally to me, I must say. But Haidt says you should see a big elephant with a little, sharp-suited lawyer on its back.
Fourth, concerning Haidt’s status as a psychologist—apropos of “what’s the big deal”—I should say that I don’t think there’s anything especially original or innovative about his thought. I’m not sure there’s really supposed to be. As far as I can see, Haidt is known for his advocacy of moral intuitionism, and that’s pretty much it as far as originality goes. He invented a set of “harmless taboo violation” scenarios, such as the story of a guy who buys a chicken at the grocery store and has sex with it before he cooks it and eats it. Disgusting, no doubt, but immoral? Stories like this prompt moral objections from lots of people who struggle to justify their moral judgments rationally, and this provides persuasive evidence that moral judgments, at least in many cases, are driven primarily by feelings, not reason. But this is about it in terms of innovative or original experiments—and he did this a long time ago. Otherwise, his chief interest is as a big picture guy. He is very good at drawing together the strands of what’s going on in social and cognitive psychology and integrating them into a coherent large-scale picture of the human situation. (He’s also a good communicator.) For instance, and in particular, he was able to see that the state of play in cognitive psychology implies that the moral rationalism that prevailed in moral psychology in the 1980s and 1990s is off base, and he argued for this very effectively. But he didn’t invent that state of play in cognitive psychology. He just applied it to moral psychology. Of course, that is a significant achievement! But all this stuff about the elephant and rider we’re talking about right now—his only contribution is the metaphor of elephant and rider. He didn’t invent any of the theory.
Fifth, I can tell you my own motives for reading TRM. First, I want the opportunity to contemplate the elephant/rider metaphor more, and especially its moral intuitionist application. This is definitely giving me that, which wouldn’t be happening nearly so well without your input, so thank you for that. I would say don’t stop commenting just because you haven’t read the book. Second, I’m interested in learning his ideas concerning the source and nature of moral intuitions. We haven’t gotten to that part of the book yet, and so have not had a chance to discuss it. Do we have innate “moral emotions”? Or are all moral intuitions culturally learned? What determines them? Why I care about this should be evident from “Morals and the Free Society,” especially the section on Hayek. I think Hayek would be cheering Haidt on all the way, especially if Haidt’s ideas about the sources of our moral intuitions turn out to chime with Hayek’s. Third, I am interested in Haidt’s thoughts concerning religious and political controversy and how to handle it, especially in personal confrontations. Especially in politics, Haidt has gone to the trouble to try to understand the basic moral intuitions that drive the beliefs of both left wingers and right wingers. Skimming ahead in the book, it looks to me like he has done a good job of this. I am looking forward to learning what he has found, particularly with regard to its implications for how to address each other’s elephants. I have spent too much of my life either seeking or avoiding political confrontation. There’s got to be a better way, and maybe Haidt can teach me something about that.
That’s very helpful. Thank you. My main gripes still seem to stand, though.
“In this chapter I’ll show that reason is not fit to rule; it was designed to seek justification, not truth.” Assuming that “designed” is, as usual, a metaphor for natural selection, then this seems to be a claim that justification is the function that explains why human beings have reason. Presumably that implies that its other functions are derivative. I don’t think that’s plausible at all, and I struggle to see how it makes much sense, whether or not we take ‘reason’ to pick out the rider or a broader capacity or set of capacities. What I still don’t see is how the faculty of controlled, explicit reasoning can play the justification role if we are not already such as to be moved by rational justifications, and to be such as to be moved by rational justifications requires that we already be engaged in some level of rational thought. I also doubt whether truth can be merely a byproduct. Even if we suppose that there is no truth in moral matters, the faculty of conscious, controlled, explicit reasoning and the practice of offering rational justification apply to non-moral matters as well, and if it were not reliably truth-tracking in some domain, then it would seem likely to get us into pretty severe trouble rather than to offer us some advantages by virtue of which it would be selected for. Nor does it seem as though reliable truth-tracking can be a mere coincidental feature of the faculty even if we suppose that it is never more than instrumentally beneficial, such that it is only ever put to use in the service of fundamentally non-rational motives. Of course, the domain in which reason is reliably truth-tracking need not be especially wide for the purposes of evolutionary explanation; in that respect it may well be quite right to say that reason is not designed for truth (I suspect that it isn’t and that this is one reason why certain areas of science and philosophy are so outrageously hard). 
But I have a hard time seeing how reliably tracking truth in a certain limited domain could be incidental to how human beings have managed to survive. It is one thing to hold that the evolutionary explanation and natural function of reason is not such as to make us all natural theorists ruled by our intellects; it is another to say that truth does not enter into it at all, and quite another thing to say that justification is the evolutionary function of conscious, controlled, explicit reasoning — even if we grant that truth has to play some sort of role.
I also still don’t see this view as all that distinctive and interesting, or as an unusual way of looking at people. Hume and Nietzsche have broadly similar views, and while Kant, Aristotle, and Plato obviously hold much higher views of reason, their discussions of ordinary human psychology should not lead us to expect that reason plays anything more than an instrumental, rationalizing role in most people. Perhaps Kant would want to resist most strongly, since he seems to think that virtually everyone really does on occasion grasp moral truth via an exercise of pure reason. But Plato and Aristotle sure as hell don’t. For Plato and Aristotle, the only people in whom reason rules are philosophers, and philosophers are a rare bunch; for Plato, anyway, at least in some moods, very few people are even born with the capacity to live philosophical lives, and then their societies tend to screw them up and ensure that they don’t. Plato would, like Aristotle and Kant, disagree with Haidt’s overall view (and not just because none of them believed in evolutionary theory), but they would not tell us to expect that most people’s reasoning is much more than a tool of their passions. In that respect, I think it’s quite wrong to say that the classical image of rational animals is of rational agents that are just sometimes beset by various passions.
But I’m repeating myself now, and I’m pretty sure you haven’t misunderstood me, so I’ll stop beating the horse, even if it isn’t dead.
A further point, though, is that if Haidt really means to infer that there aren’t and can’t be exceptions, then he’s either making an elementary logical mistake or he’s generalizing hastily. It’s an entirely empirical matter, but I suspect that it’s certainly true that we are all susceptible to putting our non-rational intuitions in the driver’s seat at least some of the time. That we can never do anything else unless we are being held to account by a high quality jury is hardly a compelling conclusion even if we accept the main lines of what you’ve reported of Haidt’s theory, and it simply doesn’t fit my experience. That we all do better when pressed to justify our beliefs to others certainly fits that experience, but that we are all otherwise determined to reason only in ways that confirm our intuitions does not.
I am also not sure that Haidt’s terminology of “rationalism” vs. “intuitionism” is very helpful. Of course it would be silly to object to it on the grounds that these terms are used very differently in philosophy. But there is no clear reason to suppose that intuitions as such must be non-rational in anything other than the narrow sense in which they are not the immediate products of conscious, controlled, explicit reasoning. What seems much more important is whether or not the emotions that these intuitions are tied up with are cognitive. From a cognitivist, moral realist point of view, there is nothing strange about the idea that our basic grasp of what is good and bad comes via intuitive emotional response rather than abstract reasoning, just as there is nothing inherently odd about supposing that our basic grasp of what exists comes via perception rather than abstract reasoning. With all due respect to the formidable philosophers who have held rival views, both of those seem equally plausible to me. Hence my intuitive (!) response to the experiments in which people cannot offer plausible defenses of the claim that it is morally wrong to fuck a dead chicken is that this no more shows that it is irrational to believe that it is morally wrong to fuck a dead chicken than most people’s inability to offer a plausible defense of belief in the external world or the existence of tables shows that it is irrational to believe that there are mind-independent objects and that tables are among them. I don’t mean to be overlooking the distinction between what is in fact reasonable to believe and the rationality of the process by which we come to believe it, either; I mean that these beliefs, as held by most people most of the time, strike me as the products of reason functioning well. 
(Apologies for the crass language, but ‘to have sex with’ seems singularly inapplicable to dead animals insofar as ‘with’ involves some kind of mutuality, and some crass acts can only really be described in crass language – though I suppose my thinking so might be evidence that my moral views are just expressions of my non-rational revulsion!).
But all my gripes are entirely consistent with there being lots of interesting and valuable stuff in Haidt’s book, not the least the things you’ve mentioned. So I’m looking forward to hearing more, even if I’m not sold on the overall theoretical framework.