In chapters 7 and 8, Haidt describes in detail his account of our innate “moral foundations”—a relatively small set of fundamental psychological mechanisms that underlie and produce our moral intuitions. In previous chapters, he has argued that moral judgment is driven primarily by moral intuition—that the intuitive dog wags the rational tail—and that our moral intuitions cover more areas of life than just harm and fairness. It is now time to get specific. Just what are these fundamental, innate sources of moral intuition, and how can we show that we really have them?
This topic is important because if there is really a serious case to be made that (a) our moral judgments are driven largely by our moral intuitions, and (b) our moral intuitions are strongly shaped by genetically hardwired “modules” that first evolved in our Pleistocene past, then our moral intuitions are cast in a new light. For one thing, any status they may have been thought to have as final arbiters of moral questions is drastically undercut. Intuitions on this view turn out to be, not repositories of cultural wisdom, much less oracular voices of moral reality, but rather crude behavioral impulses that facilitated survival and reproduction among our hunter-gatherer ancestors; for example, by inhibiting us from killing our children or mating with our siblings or eating bacterially infected meat. Given such origins, these impulses should not be trusted implicitly; they need to be evaluated by some other, rationally defensible standard. For another thing, if certain moral judgments result in some important part from the influence of genetic moral modules, it is important to know exactly which they are and something about their evolutionary status and origin, in order to evaluate those moral judgments. For example, recall Hayek’s claim (seconded recently by David Rose in The Moral Foundation of Economic Behavior) that the morality of “solidarity and altruism” is a genetic atavism appropriate to our hunter-gatherer past (and appropriate to family life still today) but totally inappropriate as a standard for the “extended order” of market-based, large-scale societies.
At the outset of his discussion, Haidt makes an important qualification to his claim that we have innate “moral foundations.” Namely, we should not think of the genetic fixation of the moral foundations in a naively inflexible way, as though the inheritance of a module for Loyalty were like the inheritance of blue eyes. A Loyalty module can’t be like blue eyes, because what loyalty means and how it is expressed is in large part culturally determined. Loyalty to one’s group may be a universal feeling, but how and when and how much it is expressed differ from one culture to another. There may even be some cultures where it is largely suppressed. So, Haidt says, we should be wary of the “hardwired” metaphor. “Prewired” might be better. Or again, he advises us to think of the role of genetics in determining brain functioning as like writing the first draft of a book. The brain does not come into the world tabula rasa, but nor is it written out complete. Instead, the genetic code writes a first draft of the brain, and subsequent drafts and rewrites take place throughout development.
I will describe the moral foundations that Haidt proposes, commenting as I proceed. I don’t find all of them equally plausible. I also think that in certain cases he may be running together propensities that should be regarded as separate modules. (I should say for the record that I am not a fan of “module” talk, which Haidt adopts from evolutionary psychology, of which I am also not a fan. But since nothing in his theory depends on there being literally any such modules—as Haidt acknowledges—his module talk may be taken as metaphorical, and I will say no more about it.)
The Care/Harm Foundation. Haidt names each moral foundation with a single word expressing its essence followed by a second term naming a contrast. So care—concern or compassion in response to suffering—is the essence of the Care/Harm foundation, implying an abhorrence of harm. Haidt argues that Care/Harm originally evolved to motivate mothers to care for their children. This is why we respond to cuteness, especially of children, with tender feelings. As genetically and culturally evolved and articulated, it motivates caring and kindness in general. Through the influence of culture, its triggers can be expanded vastly beyond concern for children to encompass everything from baby seals and beached whales to famine victims halfway around the world. I have no particular comment on Care/Harm. It seems like a slam dunk. Goetz, Keltner, and Simon-Thomas mount an extended defense of compassion as an innate moral emotion in this Psychological Bulletin article, and they are very persuasive.
The Fairness/Cheating Foundation. Haidt initially interprets the fairness impulse (following Robert Trivers) as centered on reciprocity. “You scratch my back and I’ll scratch yours.” What is fair is that, if you have scratched my back, I return the favor at some point. Cheating is taking advantage of the things others do for you and not reciprocating. Haidt argues that Fairness/Cheating evolved to enable people to take advantage of the benefits of cooperation. We evolved to play tit-for-tat in the Robert Axelrod sense: be initially cooperative but maintain a sharp eye for cheaters and punish them, or at least withhold future cooperation from them. Though he doesn’t explicitly refer to it, Haidt’s talk of looking out for cheaters seems to allude to a classic study by Cosmides and Tooby (“Cognitive Adaptations for Social Exchange”). His discussion does not make clear whether he takes Fairness/Cheating to include willingness to cooperate even when there is little reason to expect reciprocation and willingness to punish even when the cost of doing so is greater than any personal benefit one can expect to derive. There is good reason to think that cooperation cannot evolve without these latter impulses, which make people into what Herbert Gintis and Samuel Bowles (and their co-authors) call “strong reciprocators,” but they are not endorsed by Trivers, Axelrod, or Cosmides and Tooby. From its roots in reciprocal cooperation, Fairness/Cheating expands to encompass people getting what they are due more generally, whether this is construed as equal or proportional shares, with people who are thought to be getting unfair shares typically accused of some form of cheating.
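To make the tit-for-tat logic of this reciprocity-centered account concrete, here is a minimal Python sketch of an iterated prisoner's dilemma in Axelrod's spirit. The payoff values and the round count are my own illustrative choices, not figures taken from Haidt or Axelrod.

```python
# Toy iterated prisoner's dilemma: tit-for-tat versus an unconditional cheater.
# Payoff values (T=5, R=3, P=1, S=0) and the round count are illustrative only.

PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then simply copy the partner's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """A cheater: takes what others do for him and never reciprocates."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []          # entries are (own move, partner's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))    # (30, 30): reciprocators prosper
    print(play(tit_for_tat, always_defect))  # (9, 14): the cheater gains once,
                                             # then loses all future cooperation
```

Notice that in this picture the only "punishment" is withheld cooperation; nobody pays anything out of pocket to sanction the cheater, which is precisely the limitation the strong-reciprocator idea is meant to address.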
As I said, the above is Haidt’s initial account of Fairness/Cheating. It is the way he looked at Fairness/Cheating as recently as his 2007 paper (with Craig Joseph), “The Moral Mind.” But in 2008, he decided he had made a hash of it. He was especially impressed with feedback from readers in which, on the one hand, justice concerns seemed to have little to do with equality, and on the other hand, concerns about equality seemed to have little to do with reciprocal cooperation. So he modified the theory. First, he changed the emphasis of Fairness/Cheating to drastically reduce concern with equality and enhance the element of proportionality. Fairness/Cheating still has its roots in reciprocal cooperation, but its subsequent elaboration is now in the direction of “just deserts”—that people should reap what they sow. It is the idea of karma. Moreover, Haidt took to heart research showing that Trivers’s essentially self-interested concept of reciprocity is inadequate to explain large-scale human cooperation. In particular, without an intrinsic desire to punish cheaters, large-scale cooperation has little hope of being sustained. Thus, innate righteous anger against people who take without contributing assumed greater importance in the Fairness/Cheating foundation. (In other words, Haidt moved toward the concept of strong reciprocation.)
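The shift toward strong reciprocation can be illustrated with a toy public-goods round in Python. The endowment, multiplier, and punishment terms below are my own assumptions for illustration, not figures from Haidt or from the experimental literature; the point is only that punishing a free rider is individually costly, so purely self-interested reciprocity will not supply it.

```python
# Toy public-goods round with an optional costly punishment stage.
# Endowment, multiplier, and punishment terms are illustrative assumptions only.

def public_goods_round(contributions, multiplier=1.6, endowment=10):
    """Each player keeps whatever she doesn't contribute; the common pot is
    multiplied and shared equally, so free riding pays at the individual level."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def punish(payoffs, punisher, target, cost=1, fine=4):
    """Strong reciprocation: the punisher pays a personal cost to fine a free
    rider, even though she gains nothing materially by doing so."""
    payoffs = payoffs[:]
    payoffs[punisher] -= cost
    payoffs[target] -= fine
    return payoffs

if __name__ == "__main__":
    payoffs = public_goods_round([10, 10, 10, 0])   # player 3 free rides
    print(payoffs)                                  # [12.0, 12.0, 12.0, 22.0]
    for punisher in (0, 1, 2):                      # every cooperator fines the free rider
        payoffs = punish(payoffs, punisher, target=3)
    print(payoffs)                                  # [11.0, 11.0, 11.0, 10.0]
    # Free riding no longer pays, but each punisher paid for that out of her
    # own pocket; hence the need for an intrinsic taste for righteous anger.
```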
Second, he added a new foundation, Liberty/Oppression. The essential impulse here is hatred of bullies. Haidt notes that among chimpanzees, alpha males aren’t really leaders. They don’t provide services to the group, they just take what they want and back up their privileges with brutality. And they aren’t respected, only submitted to, and it sometimes happens that their subordinates team up and defeat and even kill them. The early ancestors of humans may well have been similar, but gradually among them an equalitarian social practice evolved. This may have been stimulated by the development of tool use, especially weapons such as spears, which greatly reduce the effect of mere brute strength in determining the outcome of a fight. Another factor that may have been important for promoting egalitarianism is the development of language, which enables punishment to take the relatively safe and effortless form of gossip and whisper campaigns, and which would have facilitated coordinated action of the kind needed to gang up on a bully and ostracize or even kill him. Thus may have evolved the egalitarianism for which hunter-gatherers are famous. And however it evolved, the ethnographic evidence seems clear that hunter-gatherer egalitarianism is a fact (see Kelly, The Lifeways of Hunter-Gatherers). Hunter-gatherers do not permit any person to get too high, presumptuous, or full of himself without a take-down. As I say, Haidt interprets the essential impulse here as freedom from oppression and hatred of oppressors. Thus, Haidt thinks that the concern of contemporary libertarians and some conservatives with government “tyranny”—infringement of personal liberty—is rooted in the Liberty/Oppression foundation. But so is the concern of contemporary liberals with equality and “social justice,” which sees the roots of inequality in the predatory and oppressive behavior of elites (and believes that capitalism is a predatory and oppressive system). Liberal concern with racial discrimination, colonialism, glass ceilings, etc., also jibes well with the Liberty/Oppression foundation, and indeed seems much better explained as emanating from Liberty/Oppression than from a primary impulse to equality.
This is Haidt’s account of Fairness/Cheating and Liberty/Oppression as refined in The Righteous Mind, but I think there are still problems. Particularly with regard to Fairness/Cheating, it seems to me that he is confusing concern for norms and norm violation with concern for proportional justice. Just because people take certain behaviors as norms and desire to enforce those norms does not mean they believe in karma or that norm adherence constitutes justice in any stronger sense, and it certainly doesn’t mean they believe that rewards should be commensurate with contributions. The latter idea—which is what I mean by proportional justice—seems relatively abstract and advanced, actually, rather than innate and primitive. I don’t know that there’s much psychological evidence that people just take naturally to the idea. Justice as equality, on the other hand, does have some backing as an innate primitive. Alan Page Fiske—one of Haidt’s mentors, whom he still cites frequently—makes a strong case in his book Structures of Social Life for what he calls “Equality Matching” as a basic structural principle of human relationships. Equality Matching is suitable for governing relations among peers. It consists of requiring more or less exact parity in decision making, division of work and goods, gifting, favors done and received, and so forth. Thus, Equality Matching requires that everyone have an equal say in decisions; democratic voting; equal distribution of goods; turn taking for goods that can’t be divided; assignment by lot of indivisible rewards, honors, responsibilities, jobs, etc.; that gifts or favors be returned in kind and in equal amounts; and so forth. Fiske argues that although Equality Matching is not applied to all aspects of social life in any society, it is applied to some in practically every society. It thus has a claim to be a human universal. Also, young children adopt Equality Matching as a norm, becoming obsessed with making exactly equal divisions of cookies, chores, and whatnot. Haidt acknowledges this but argues that children grow out of it and gradually adopt proportionality instead. However, this reply seems weak, especially if his evidence comes from WEIRD children. If proportionality is cultural, late childhood is exactly when we should expect it to be adopted.
Thus, an innate equality structure along the lines of Fiske’s Equality Matching seems more plausible than an innate proportionality structure. I say “structure,” by the way, because what is in question here is something more cognitive than emotional. Equality Matching is a concept, not a feeling (and this would be true of proportionality as well). This is different from Care/Harm, which does seem to be primarily a feeling (compassion). Thus, not all moral foundations have the same psychological makeup.
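To put the contrast in the plainest possible terms, here is a toy Python sketch of the two division rules applied to a jointly produced pot. The names and contribution figures are invented for illustration and carry no empirical weight.

```python
# Two rules for dividing a jointly produced pot: Fiske-style Equality Matching
# (equal shares, no tabs kept) versus proportionality ("just deserts").
# The names and contribution figures are invented purely for illustration.

def equality_matching(pot, contributors):
    """Everyone gets the same share, regardless of who put in what."""
    share = pot / len(contributors)
    return {name: share for name in contributors}

def proportional(pot, contributions):
    """Rewards commensurate with contributions: you reap what you sow."""
    total = sum(contributions.values())
    return {name: pot * c / total for name, c in contributions.items()}

if __name__ == "__main__":
    contributions = {"Ana": 6, "Ben": 3, "Cal": 1}   # hypothetical work inputs
    pot = 30
    print(equality_matching(pot, contributions))  # {'Ana': 10.0, 'Ben': 10.0, 'Cal': 10.0}
    print(proportional(pot, contributions))       # {'Ana': 18.0, 'Ben': 9.0, 'Cal': 3.0}
```

Young children's insistence on the first rule, and the later shift (cultural or not) toward the second, is exactly what is in dispute above.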
But equality (or proportionality) is not all that is at issue in Fairness/Cheating. I said that I think Haidt mixes up the justice issue with norm psychology. Norm psychology—treating procedures as norms, getting angry when they aren’t obeyed, desiring to punish norm violators, and so forth—is ubiquitous and important, and it is not just about justice or reciprocity. People treat the right way to make a basket or process tubers or draw a face as normative, get angry when such norms aren’t obeyed, attempt to correct or instruct violators, etc. This is a critical human psychological trait that explains how it is that beneficial cultural practices can be maintained even when, as is often the case, people do not fully understand (or do not understand at all) the benefit they produce. As such, it is a critical element in the explanation of how our species accumulates elaborate and sophisticated culture. (For more on this, see the work of Joseph Henrich; for example, his new book, The Secret of Our Success. For a single study showing that children as young as 2 years of age grasp and attempt to enforce conventional norms as such, see Rakoczy et al., “The Sources of Normativity.”) So, norm psychology is not a moral foundation, but it is a foundation, and it is a sine qua non of moral thinking. I think it deserves a special place on the list for this reason. A lot of the behavior that Haidt describes as being rooted in Fairness/Cheating I would say is really due to norm psychology.
I have less trouble with Liberty/Oppression than with Fairness/Cheating, but I would be more comfortable if Haidt presented more hard evidence for its reality. The evolutionary story about the taming of alpha males and our species’ progressive self-domestication (taken from Boehm, Hierarchy in the Forest) makes sense, but I would like to see some experimental evidence that narrows down exactly what this impulse is supposed to be. In particular, it seems to me that oppressor hatred and the leveling impulse aren’t the same thing. Just from personal introspection, I have no problem believing that oppressor hatred is a strong impulse, and possibly pretty basic. (But that’s not what I would call hard evidence!) And all by itself this can probably account for most of what Haidt attributes to the Liberty/Oppression foundation. But this is not the same as the leveling impulse that I associate with hunter-gatherer egalitarianism. The latter is a desire to pull down anyone who gets too proud, too high, too successful; it does not require the target to have his boot on someone’s neck. It seems to me that we can see evidence of such an impulse in certain sorts of behavior that are common in our society, and I have often wondered whether it might not be explained as a genetic holdover from our hunter-gatherer prehistory. But of course, I’m just wondering; this would have to be investigated.
The Loyalty/Betrayal Foundation. This is the impulse to maintain in-group cohesion and identity; to identify with one’s own family, clan, tribe, team, nation, race, profession, club, or whatever; to reduce differences with in-group members and exaggerate differences with out-group members. Haidt cites the famous Robbers Cave study as evidence, and there’s lots more where that came from. Loyalty/Betrayal produces the morality of in-group loyalty (think Cosa Nostra), patriotism, certain forms of military heroism and personal sacrifice, and demonization of traitors.
The Authority/Subversion Foundation. This is the impulse to respect and legitimize hierarchical relations of authority. Authority/Subversion makes us hypersensitive to signs of respect and disrespect, obedience and disobedience, submission and rebellion. Authority/Subversion produces the morality of respect for authority and noblesse oblige. Such moralities are evidenced in military organizations and some families, and in the social relations of many cultures. It can be difficult for WEIRDos to recognize hierarchical relations as anything but power relations, which are thus to be regarded as inherently exploitative and oppressive and anything but moral (see the Liberty/Oppression foundation). However, hierarchical relations aren’t necessarily exploitative (however much superiors in a hierarchy may sometimes take advantage of their position). Such relations, where legitimized, are a two-way street. The higher ranking persons have authority, but they also have responsibility (for norm enforcement, dispute adjudication, aid and care for subordinates who are on hard times, and other forms of leadership). As Haidt says, “people who relate to each other in this way have mutual expectations that are more like those of parent and child than those of a dictator and fearful underlings” (167). The benefit, when this works well, is coordinated action. It is hard to see a military organization working as a democracy. Haidt’s evidence for all this is mainly to cite Fiske’s anthropological work (and some work with chimpanzees). I think Fiske makes a pretty good case.
The Sanctity/Degradation Foundation. As the Care/Harm foundation is rooted in the emotion of compassion, Haidt says that the Sanctity/Degradation foundation is rooted in the emotion of disgust. Originally, disgust evolved to help our ancestors solve the “omnivore’s dilemma” of finding a proper balance between willingness to try novel foods and wariness of foods that are not yet proven safe. It may also have evolved to motivate ground dwellers living in fairly large groups to care more about hygiene and sanitation. A significant fact about disgust is that it is transferable by contact: what has been touched by a disgusting object or person becomes itself disgusting. Thus, disgust originally evolved to motivate people to avoid things that are likely to be pathogenic. Its basic triggers are things like rotting flesh, feces, scavengers such as vultures, and open sores. But its triggers can be culturally expanded to include many things, including out-group members and people low on the social scale (“the great unwashed”). Disgust is the emotional foundation of the moral ideas of pollution, stain, and miasma. It is also (according to Haidt), paradoxically, the ultimate source of our sense of the sacred. For the idea of pollution suggests its contrary, purity. The sacred is the pure, that which must be kept from pollution and degradation at all costs. It is the infinitely valuable. When we speak of the sanctity of human life, the Sanctity/Degradation foundation is in action. Sanctity talk is in decline in the West. There are still Westerners who think of virginity as sacred, for example, but they are outliers. However, as Haidt points out, we can still see Sanctity/Degradation at work in biomedical debates over abortion, physician-assisted suicide, and stem cell research. If the only moral principles driving people’s moral judgments were utilitarian concerns about suffering or preference satisfaction, or deontological concerns about rights or autonomy or the infliction of harm, it’s hard to see how there could be much controversy over these questions. But there is serious moral controversy—the abortion debate is about much more than simply whether a fetus can feel pain—because Sanctity/Degradation intuitions are potent, even in the West.
This brings us to the end of Haidt’s list of moral foundations, but I want to suggest that there may be one more: Fiske’s Communal Sharing. This is the concept of all working together for a shared purpose, without keeping particular track of who contributes what or who removes what from the common pool. Many married couples operate this way, and nuclear families more generally, as well as probably many small shops and undertakings. Think of kids building a tree house. Fiske finds that in small-scale societies, such as the Moose (pronounced MOH-say) that he studied in Burkina Faso, Communal Sharing structures many more activities than we are accustomed to in the West, including most of the farming (and they are an agrarian people). In “The Moral Mind,” Haidt and Joseph consider Communal Sharing to be encompassed under the Loyalty/Betrayal foundation. But as Haidt currently explains Loyalty/Betrayal, it is about maintaining in-group identity, unity, and cohesion—it’s about loyalty—not sharing. There can be strong in-groups without Communal Sharing, and there can be Communal Sharing without a very cohesive group (as in the tree house example). Communal Sharing is also not the same as concern with equality. The idea of equality implies separate individuals who require equal shares, equal turns, an equal say, and so forth. Communal Sharing is just the opposite, since it requires that shares not be counted and at its extreme hardly recognizes people as individuals, as opposed to members of the communal group. Meanwhile, there is good reason to think that helping and sharing is a basic human impulse. Haidt spoke in an earlier chapter of the work of Kiley Hamlin, who found that infants and toddlers spontaneously prefer agents who are helpful and dislike agents who hinder others. Also, children as young as their second year of life spontaneously help others (for example, by picking up and providing a reached-for object), share food, and provide information (Warneken and Tomasello 2009). Moreover, their motives for such behavior are not for extrinsic rewards such as praise or cookies, nor are infants so much motivated to provide help themselves as they are to see that the other person is helped (Hepach et al. 2013). Thus, pitching in and helping communally, without counting how much any particular individual gives or takes, is a natural mode of social relation. In many different contexts, determined differently from one culture to another, this is thought of as “right” behavior and keeping tabs is thought of as mean-spirited. Surely this sort of concept explains the impulse to set up utopian communes, large and small. I also think it explains a great deal of charitable giving (as a principle separate from compassion). It also seems to underlie moral principles such as, “If someone sues you for your shirt, give him your coat also.”
So Haidt has six moral foundations: Care/Harm, Fairness/Cheating, Liberty/Oppression, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation.
I myself would suggest a more expansive list, including as many as nine: Care/Harm, Equality Matching, Norm Psychology, Liberty/Oppression, Social Leveling, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Communal Sharing.
One last point ought to be mentioned, which is the application of these principles to politics. Haidt devotes a lot of space to this. His most important point is that social conservatism is impossible to understand without the help of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation. It is conservatives who stress duty to country, patriotism, respect for parents and for leaders, the sanctity of life, of marriage, and of the family. Liberals have a way of dismissing these principles, and social psychologists have a—rather shameful, actually—history of trying to use their “science” to pathologize their conservative political opponents, rather than take their claims at face value and attempt to understand them on their own terms. One of the more salutary features of Haidt’s work is that he calls out this behavior for what it is and urges psychologists to do better. Haidt’s description of a 2008 essay he wrote deserves quoting at length:
I titled the essay “What Makes People Vote Republican?” I began by summarizing the standard explanations that psychologists had offered for decades: Conservatives are conservative because they were raised by overly strict parents, or because they are inordinately afraid of change, novelty, and complexity, or because they suffer from existential fears and therefore cling to a simple worldview with no shades of gray. These approaches all had one feature in common: they used psychology to explain away conservatism. They made it unnecessary for liberals to take conservative ideas seriously because these ideas are caused by bad childhoods or ugly personality traits. I suggested a very different approach: start by assuming that conservatives are just as sincere as liberals, and then use Moral Foundations Theory to understand the moral matrices of both sides.
Haidt’s basic analysis is that the moral view underlying contemporary liberalism relies almost exclusively on just three moral foundations: Liberty/Oppression, Care/Harm, and to a lesser extent, Fairness/Cheating. The moral view underlying conservatism, by contrast, makes heavy use of all six. The use is not always the same, of course. For instance, as noted above, what is taken to constitute oppression differs considerably between liberals and conservatives. And there are variations of emphasis. For example, the Care/Harm foundation is more important to liberals than to conservatives. Nevertheless, the morality of conservatism draws from a considerably richer palate of moral tastes than does liberalism. Liberalism sounds just one or two moral notes, and this, Haidt argues, puts liberalism at a disadvantage when it attempts to appeal to everyday people, especially less educated people who haven’t had most of their moral taste buds cauterized by NPR and university courses in left wing ideology. He recommends that liberals try to broaden their appeal.
He presents data from over 100,000 survey participants in the U.S. showing very persuasively that endorsement of the Care/Harm and Fairness/Cheating foundations is positively and linearly related to endorsement of liberal ideology, while endorsement of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation is positively and linearly related to endorsement of conservative ideology. (This survey was conducted before Liberty/Oppression had been distinguished from Fairness/Cheating and before Fairness/Cheating had been converted from a concern with equality to a concern with proportionality.) The slopes of the lines for Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation are twice as steep as those for the other two foundations, and all five lines more or less converge to the same level at the “Very Conservative” end of the scale (Fig. 8.2, p. 187). In other words, conservatives employ all five of the measured moral foundations at a reasonably high level. The difference with liberals is that liberals use Care/Harm and Fairness/Cheating (understood as equality) even more, and they don’t use the other three foundations much at all.
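For readers who want to see the shape of that claim, here is a toy Python sketch that fits lines to invented foundation scores across a five-point ideology scale. The numbers are made up solely to mimic the qualitative pattern described for Fig. 8.2 (liberal-favored foundations declining gently, the three "binding" foundations rising about twice as steeply and converging at the conservative end); they are not Haidt's survey data.

```python
# Toy illustration of the Fig. 8.2 pattern described above. The numbers are
# invented to mimic the qualitative shape only; this is NOT Haidt's survey data.
import numpy as np

ideology = np.arange(5)   # 0 = very liberal ... 4 = very conservative

scores = {
    # liberal-favored foundations: start high, decline gently
    "Care/Harm":           [3.7, 3.6, 3.5, 3.4, 3.3],
    "Fairness (equality)": [3.7, 3.6, 3.5, 3.4, 3.3],
    # "binding" foundations: start low, rise twice as steeply, converge
    "Loyalty":             [2.5, 2.7, 2.9, 3.1, 3.3],
    "Authority":           [2.5, 2.7, 2.9, 3.1, 3.3],
    "Sanctity":            [2.5, 2.7, 2.9, 3.1, 3.3],
}

for name, y in scores.items():
    slope = np.polyfit(ideology, y, 1)[0]   # least-squares slope per ideology step
    print(f"{name:20s} slope = {slope:+.2f}, 'very conservative' level = {y[-1]}")
```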
Finally, it’s worth noting that Haidt’s system makes crystal clear the way in which libertarians are different from conservatives. For, libertarians of course don’t typically care about the Loyalty/Betrayal, Authority/Subversion, or Sanctity/Degradation foundations any more than liberals do. But they share with conservatives a relaxed attitude toward Care/Harm. They thus show a distinct profile on Haidt’s set of moral foundations. They also suffer, from a PR perspective, in attempting to promote their views in the public sphere. Even more than liberals, they are lacking in easy and natural ways of hooking people’s moral intuitions. This may also be why it is so easy to portray libertarians as amoralists. Haidt and colleagues published a long study on libertarian morality fairly recently, which I hope to comment on soon.
Thank you, David. I am responding mainly to the first part of your summary, in which you write about “moral intuition.” I am not very familiar with this idea from philosophers, e.g., Michael Huemer (but see below). However, I am reading Rationality and the Reflective Mind by Stanovich. (Thanks for mentioning it.) Stanovich presents a tripartite division of mind – autonomous, algorithmic, and reflective. Perhaps what Stanovich calls the algorithmic mind helps explain moral intuitions. As children we all learn moral lessons from our parents and others seen as authorities. These lessons help form an algorithmic mind about moral issues. Your review also describes the Authority/Subversion Foundation. It and some of the other foundations seem quite compatible with developing an algorithmic approach to moral matters. In the A/S Foundation section Haidt even says, “people who relate to each other in this way have mutual expectations that are [ ] like those of parent and child”. What might seem to be moral intuitions might be the product of the algorithmic mind.
I haven’t read Huemer’s Ethical Intuitionism. But I peeked at it using Amazon’s Look Inside feature, specifically searching for algorithm. I saw the following: “[T]he dependence of moral properties on non-moral ones does not entail the existence of an algorithm for computing moral verdicts from non-moral facts.” Thus the kind of algorithm Huemer means is far different than Stanovich’s algorithmic mind.
Hi Merlin.
You make some interesting comments, though I’m not sure how you connect intuitions with Stanovich’s algorithmic mind. In reply, I will say something about philosophers’ intuitions versus Haidt’s use of the term, then I’ll describe how I see Stanovich’s tripartite model in relation to Haidt.
Philosophers, it seems to me, have a way of according “intuitions” epistemic worth. It isn’t just moral intuitions, either. They do this with “intuitions” in general. The epistemic worth so accorded seems to take two broad forms, which I alluded to in my post, that have different degrees of seriousness. The strong, serious form holds that there is a dimension of reality to which we have a special cognitive link through intuition. A nonmoral example would be David Lewis’s claim that we have direct knowledge of other possible worlds through intuition. For Lewis, talk of other possible worlds isn’t metaphorical. They really exist and are just as real as the actual world. All that distinguishes the actual world from other possible worlds is that the actual world is the one the knowing subject happens to be in. And for a person in some other possible world—say, the world in which Napoleon won the battle of Waterloo and French is the world’s second language, not English—his world is actual and ours is merely possible. Since all possible worlds really exist, but we have experience only of our own, “actual” world, how do we know about the others? There really is no answer to give except that we just have special cognition of them through intuition. And so Lewis held that when we evaluate the truth of counterfactual statements, such as, “If Napoleon had won the battle of Waterloo, French would be the world’s second language, not English,” we do so by consulting our intuitive knowledge of the space of other possible worlds. A moral example of the same sort of thing would be G. E. Moore’s idea that we have special knowledge of a non-natural quality, the good, by intuition. Moore held that the good is real, but it is not accessible by ordinary empirical means; therefore, we must have special cognitive access to it through intuition. I get the impression Huemer may have a similar sort of theory, but I don’t know that; I haven’t read him either.
However, mostly philosophers don’t make such strong claims for intuition. Usually “intuition” just means a statement one feels strongly inclined to endorse, even in the absence of any particular evidence or argument. A non-moral example would be the intuition one may have about Mary, the genius, know-it-all color scientist who has spent her whole life in a black-and-white room and never seen red. On the day when she escapes from her room and sees something red for the first time, would she learn something new, namely what the color red is like—something all her black-and-white book learning about color could not have told her? It might seem obvious that the answer is yes, but really we haven’t done the experiment and although yes might feel like the right answer, we don’t actually know. Maybe all she would learn is how to recognize visually something she already knows everything about intellectually. You can imagine this with geometric shapes, for instance. A person with severe cataracts might learn geometry without being able to see squares, triangles, and so forth. On the day of his cataract surgery, when he acquires sight at last and sees a square shape for the first time, he may learn a new skill—namely, how to recognize a square visually—but he doesn’t learn anything fundamentally new about shapes. He doesn’t learn what a square is like; he already knew that.
I didn’t mean to go on so long about these examples. The point is that most talk of “intuitions” among analytic philosophers is of this latter kind. They talk about their intuitions concerning Mary’s new color experience, and concerning whether “zombies” (creatures without qualia experiences) could exist, and about whether we truly follow conceptual rules (for instance, rules of arithmetic) as opposed to merely following behavioral routines, and about whether one should pull the switch in the trolley problem. Note that only the last of these concerns morality. And although they don’t make strong claims of the Lewis/Moore kind about the nature of these intuitions, they nevertheless have a way of talking as though these intuitions are at least potentially decisive. And there are theorists who advocate for this. For instance, Gilbert Harman, in his book Change in View, argues that we have a right to our existing beliefs in the absence of positive reason to doubt them. Thus, if you find that you believe something pretty strongly—such as that you should (or should not) pull the switch and divert the trolley—you have an epistemic right to that belief to the extent that you can answer any positive objections to it. You don’t need a positive reason in favor of it; the fact that you hold it is sufficient to regard it as epistemically warranted. Again, Stephen Stich, in The Fragmentation of Reason, talks as if the methodology of analytic philosophy consists pretty much entirely of massaging intuitions into a coherent system. He doesn’t offer a theory of why intuitions have epistemic merit, but if he thinks philosophy is the pursuit of knowledge, which he seems to, then he must think that intuitions can be a foundation of knowledge.
Now, I think Haidt’s own talk of intuitions is completely devoid of epistemic implications. Haidt treats intuitions in the weaker way I described above, as opinions one may be strongly inclined toward, even in the absence of evidence or argument, but he is under no illusions that such opinions are per se epistemically warranted. Indeed, he is in the business, as a psychologist, of trying to find the psychological sources of such intuitions, sources which are probably not very strongly related to their being true.
So the main takeaway here is just that philosophers’ talk of intuitions tends to have pretensions to truth and knowledge, and Haidt’s doesn’t. For Haidt, “intuitions” are just the psychological phenomenon that people somehow have strong opinions about, say, moral questions, which they find hard or impossible to rationally justify. Haidt does not assume that such opinions must nevertheless somehow be a guide to truth. If anything, he assumes rather the opposite: intuitions as such may or may not be faithful guides—this has to be evaluated by appeal to some further, rationally defensible standard.
Concerning Haidt and Stanovich, Haidt says that the source of moral intuitions is “the elephant,” his metaphor for the fast, automatic, effortless system of emotional and associative processing, as opposed to the slow, controlled, effortful system of explicit reasoning. Stanovich calls the former Type 1 processing, the latter Type 2. Type 1 processing is conducted by what Stanovich calls the “Autonomous Mind.” Type 2 processing is split between the “Reflective Mind” and the “Algorithmic Mind.” So in Stanovich’s terms, Haidt would say that intuitions are a product of the Autonomous Mind. Therefore, when you speak of the Algorithmic Mind being a source of moral intuitions, I’m not really sure what you’re getting at. Maybe I’m misunderstanding you somehow.
Interestingly, Stanovich divides Type 2 processing into the Algorithmic and Reflective because he finds that raw processing power—basically, IQ—is not correlated very strongly with rationality. The distinction can be illustrated if you think about playing a game of chess. The mental power you have to look three or four moves ahead in the game, keeping in mind all the different possible replies and your answers, is supplied by the Algorithmic Mind. It is basically your working memory capacity. On the other hand, if you are in the habit of forming a plan of action and then going ahead with it immediately without looking very hard for flaws, that would be a failing of the Reflective Mind. Whereas the opposite habit, of looking for errors in your strategy, would be a virtue of the Reflective Mind (a more rational way of proceeding). The Reflective Mind consists of just those dispositions in which rationality and irrationality consist. The dispositions are one thing, the raw processing power is another. This is how Stanovich tries to explain the lack of correlation between IQ and rationality.
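As a loose analogy only (mine, not Stanovich's), the distinction can be sketched in Python: algorithmic capacity is how many candidate plans an agent can hold and rank at once, while the reflective disposition is whether it bothers to re-check its chosen plan for flaws before acting. Everything here (the plan structure, the flaw-spotting function) is a hypothetical toy, not a model from the book.

```python
# Toy analogy (mine, not Stanovich's): algorithmic capacity = how many candidate
# plans the agent can hold and rank; reflective disposition = whether it re-checks
# the chosen plan for a flaw before committing to it.
import random

def choose_plan(plans, capacity, reflective, spot_flaw):
    """Rank up to `capacity` plans by apparent value; if `reflective`, pass over
    any top-ranked plan in which a flaw can be spotted."""
    considered = plans[:capacity]                 # raw horsepower limits the search
    ranked = sorted(considered, key=lambda p: p["apparent_value"], reverse=True)
    for plan in ranked:
        if reflective and spot_flaw(plan):        # the reflective check
            continue                              # keep looking for a sounder plan
        return plan
    return ranked[0]                              # if everything looks flawed, take the best anyway

if __name__ == "__main__":
    random.seed(0)
    plans = [{"name": f"plan{i}",
              "apparent_value": random.random(),
              "flawed": random.random() < 0.5} for i in range(8)]
    # High capacity without reflection may still commit to a flawed plan;
    # modest capacity with reflection tends to avoid flaws it can see through.
    print(choose_plan(plans, capacity=8, reflective=False, spot_flaw=lambda p: p["flawed"]))
    print(choose_plan(plans, capacity=4, reflective=True,  spot_flaw=lambda p: p["flawed"]))
```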
Thanks again, David. You wrote: “So in Stanovich’s terms, Haidt would say that intuitions are a product of the Autonomous Mind. Therefore, when you speak of the Algorithmic Mind being a source of moral intuitions, I’m not really sure what you’re getting at.”
My first post was a quick one. Considering what you say, I revise my conjecture to say that the source of moral intuitions could be the autonomous mind or the algorithmic mind. Stanovich doesn’t say much about the autonomous mind in terms of what sort of perceptions or ideas it deals with. Regarding moral claims, people often respond quickly and/or non-reflectively, whether that be emotionally or algorithm-style, based on some ideas they have absorbed or adopted intuitively during their personal history.
It is probably relevant to distinguish between (a) having the belief that P and (b) it seeming to one that P is true. Many, perhaps most, philosophers take (b) to constitute or necessarily correlate with having some positive justificatory or rational status (constituting a kind of reason to believe that P, however strong or weak). I think fewer believe that simply believing that P (with, say, a net neutral or even negative evidential/motivational tone or seeming-status) constitutes or entails having reason to believe that P.
It is commonly believed that (i) we could not get the business of justification off the ground without accepting intuitions, and that (ii) it is equally necessary to use intuitions on an ongoing basis in justifying beliefs, testing hypotheses, etc. This is consistent with (iii) treating intuition-based reasons as pretty weak – and requiring direct or indirect evidence of the reliability of the intuitive feel or intuition-forming process, not merely justification absent evidence of reliability, in order to get the kind of high-octane justification required for knowledge. Indirect evidence of reliability would include weighting one’s intuitions (or the intuitions of others) in accord with the degree of expertise that one (or another person) has on relevant topics (and the degree to which the person’s thinking is not warped by non-epistemic bias). Given (i) and (ii), our evidence for and beliefs about reliability will essentially rely on intuitions (and the reasons or justificatory force that they generate). But any given intuition remains subject to reliability-type vindication, or else its justificatory force might well be pretty weak. This picture – which I accept, and which allows for a variety of different more and less “pro-intuition” or “anti-intuition” views, depending on how important different sorts of reliability-type justification are – is not done justice by the common wrong, confused, or just not-adequately-precise notion that justification is “ultimately a matter of bringing intuitively-justified beliefs into a coherent whole” (unless you stipulate that ‘coherence’ refers, in large part, to strong-evidence-of-reliability-type vindication). Anyhow, that’s my general-level “state of the art” on this matter, for what it is worth.
Stanovich’s distinction (in Type-2 processing) between algorithmic and reflective rational processes strikes me (there’s an intuition!) as broadly correct and useful.
Thanks, David. That was quite informative and illuminating. For now, just a brief comment or two…
(a) Haidt admits that his “foundations” may include multiple more-basic elements (“modules”) that work together to meet an adaptive challenge. So I think he would be open to the idea that a finer-grained functional understanding of the foundations is in order (perhaps resulting in a slightly different view of how many foundations there are and how they should be characterized). He should also be open to the idea that the conceptual and emotional psychology of these mechanisms (including the “norm psychology” that functions to enforce general compliance with rules in a group) is often quite important and flexible.
(b) I’m unclear how Haidt’s functional individuation of his foundations (or their more-basic elements) goes, but a “formal” or “procedural” element (like norm psychology or, more generally, the fact that we have social-reactive attitudes toward each other) seems quite important. At least initially, Haidt seems to have glommed it (in particular, the formal/procedural elements of norm psychology) together with the “substantive” or “trigger-to-behavioral-output” issue of what is responded to (proportional reciprocity, equal sharing, etc.). You separate this formal/procedural element out (in the case of norm psychology) but also put it on all fours with the other, substantively individuated elements (foundations, modules). I’m not sure this is right. It seems that, when the formal/procedural element is this flexible, it is the content-neutral motivational-behavioral mechanisms that are functionally important. I think we are largely on the same page here, David, but I would emphasize that, by my lights, the functional analysis needs to be more fine-grained and multi-leveled (calling out these procedural/formal elements as distinct and important being one dimension along which we need more detail). Unfortunately, other than this particular suggestion, I don’t really know how the more-detailed functional analysis should go!
(c) I find it fascinating that, for some of these foundations, the initial trigger is pretty close to what, intuitively, morality endorses (e.g., proportional reciprocity) – while in other cases, although the procedures/mechanisms (like disgust responses) are part of moral thinking (if Haidt is right, and he is), morality is not much or at all concerned with the initial triggers (e.g., with thinking of them as morally bad in correlation with the primitive aversive response). It would be nice to have an explanation of this (though, of course, having one requires dipping our toes into the normative waters).
Hi Michael.
You’re right, of course, that Haidt takes his “foundations” to be complex and include possibly many sub-modules. But the thing is, I don’t like that! I think it is too mushy. I think we should try to be a bit more precise about just what psychological mechanisms are being proposed, and I also think we should rely more on experimental evidence than Haidt does. To my mind, Haidt relies too much on evolutionary stories about how his foundations supposedly originated. On the one hand, this is good, because evolutionary stories, if believable, can provide good guidance concerning the functional role of—and an interpretive lens for—a given trait or characteristic. On the other hand, even the most plausible evolutionary stories retain a whiff of the just-so story. Haidt is alive to this problem, obviously, but that doesn’t make it go away. An evolutionary story is not a substitute for hard evidence showing that a proposed psychological mechanism really exists. So the modifications I suggested are intended to make the theory a little more precise and falsifiable and grounded in experimental evidence. I think at this point in Haidt’s theory, this is what we want: less of a programmatic framework and more of a real theory.
I agree it’s interesting that the different foundations work differently, including in the way you mention (that some seem very close to their substantive moral content, others fairly far removed from it). But otherwise I don’t have anything special to say. I think this is perfectly okay. There’s no reason to expect every source of intuition to be structurally or procedurally alike. The elephant is stocked with many different kinds of processes with diverse origins. This is one point the evolutionary psychologists are surely right about.
Thank you for this. I’ve found Haidt’s moral foundation theory very useful, if imperfect. Your exposition and improvement are most appreciated. That connected a few dots for me.
I hope you will entertain a thought from me while looking at “Understanding Libertarian Morality”. According to Haidt’s model, the logical terminus of a libertarian pattern of moral psychology would be to flatline the chart of moral foundations – with one conspicuous exception, which is that libertarians show high psychological reactance. This makes sense to me: if you’re low across all foundations, you might (or might not) reach libertarian conclusions on a cognitive basis, but it makes little sense to invest in a libertarian identity. Because when dealing with bullies, the most instrumentally rational response isn’t to reflexively push back. It’s to remain calm and cultivate good relations with the authority figure. If your instinct is to push back, an emotion is motivating you. If you push back so much that you invest in an identity, take risks, commit resources, and dealbreak social relations, then a lot of emotion is probably motivating you.
My question is: what exactly *is* the emotion involved in psychological reactance? (And is this issue identical with the Liberty/Oppression moral foundation?) Please forgive my layperson’s ignorance here; there may be professionals reading who can casually address this. My personal guess would be some combination of anxiety and anger. I know that I’m personally reactive on a narrow range of triggers and these closely correspond to the political issues on which I’m motivated to initiate social conflict. And they’re clearly threat responses (they don’t trigger unless either the speaker or the issue feels salient to my life; otherwise my emotional response to people who carry dominance is positive). Meanwhile, people in my circle with otherwise similar moral foundation patterns but with higher anxiety thresholds find my tendency to take anti-authoritarian stands senselessly irrational (and, cognitively, I agree with them).
Incidentally, I can verify, from the other direction, Haidt’s claim that libertarianism correlates with low responses to a wide range of moral sentiments. Libertarians are in fact conspicuously overrepresented in the low affect community, including at least three of the few public writers. My apologies if I think critics of capitalism, libertarianism, and “the 1%” actually have a not entirely incorrect (if grossly stereotyped) narrative on this issue. Markets reward materialists and rational actors and punish sentimental attachment. Meritocratic societies with fluid social mobility are arguably a misfire of Enlightenment humanist idealism; they seem to replace socioeconomic dominance by traditional elites with socioeconomic dominance by whoever is most motivated and competitive at socioeconomic dominance. John Ralston Saul’s Voltaire’s Bastards tells an interesting story. One may not personally object strongly to this outcome, but it does seem completely reasonable if other people do. If human neurodiversity spans irreconcilable preferences even in favourable social conditions, then politics reduces to a superficially civilised permanent civil war.
I very much like your last sentence!
Thank you so much! 🙂
Political conflict is a tiny room with too many people and one television screen. We’re fighting over who gets to pick the channel, set the volume, and tint the colour. And if what we like to watch really is just a matter of which arbitrary stimuli make our brains go to happy places, then we are all philosophically screwed. There is no way out of this room. Welcome to postmodernity. The Sophists were right. Life is a tale told by neurotransmitters, full of oxytocin and dopamine, signifying nothing. Now please give me the remote.
Hi Alice. Good to hear from you again.
I think your question is quite interesting, but I doubt I’ll be able to say anything profound about reactance when the time comes. I hope that will be soon, but I’ve got an APA paper to prepare and other chores standing between me and commenting on Haidt’s libertarian morality piece. Maybe next week.
Incidentally, my own angle on Haidt and libertarianism is mainly the thought that all of Haidt’s moral foundations—even Liberty/Oppression—are intuitive. If Hayek (and Henrich and Boyd and Richerson and…) is right that the morality of the “extended order” (which would be libertarian morality, I suppose) is a purely cultural development, then it makes sense that libertarians should have a flat profile on Haidt’s measures—but that wouldn’t make libertarians really amoral. It would only show that there are non-intuitive sources of morality after all.
Thank you, sweetie. Actually, on reactance I got off my hiney and did my research. Reactance is anger. Otherwise, please don’t let me impose. Business first, always.
So, did I win that bet? While I was there I took all the tests at yourmorals.org. They say my life satisfaction is slightly better than average and my self-esteem, mental health, and objectivity of self-image are way better. On the other hand it looks like I can’t taste whole categories of stuff which people report makes them very happy, to the point where I honestly don’t understand lots of the questions (How much do you feel gratitude when…? Wait, gratitude is a feel??). I’m kinda wowed to see hard data that humans do closely feel what they preach, even if IME they don’t practice it much outside the in-group. Score one for morality!
I thought I’d mention, in passing, that a colleague at Felician stopped me in the hallway today to mention and praise our Potts-Young series on Haidt. Evidently, she teaches Haidt’s books in one of her classes here. She mistakenly thought I had written the series–but no worries, I set her strhaidt on that.
I’m grhaidtful that you did. This just shows that you never know who might happen to read one of these. I tend to think there’s an audience of maybe three, so it’s good to see that that isn’t (always) true.
There is at least an audience of four. I may have decided that it’s in both of our best interests for me to stop trying to offer substantive commentary, but that doesn’t mean I’m not reading. So between Michael, araziel, Irfan, and me, that’s four — plus there’s merjet and apparently an unknown quantity of silent but appreciative readers. Count me among the now (mostly) silent but appreciative ones!
Thank you, David. I appreciate your appreciation!
Of interest regarding moral intuitions and how/whether they are based on emotions (link from recent PEA Soup post by Joshua Knobe): http://repository.upenn.edu/cgi/viewcontent.cgi?article=1129&context=neuroethics_pubs
And here is the link to that post. The comments are well worth reading through (good up to date information on which results, some of which Haidt relies on, have and have not been replicated, which sorts of results have been confirmed or disconfirmed by meta-analysis, etc.). It is encouraging to see that social scientists are taking the non-replication problem seriously and starting to sort the wheat from the chaff.
http://peasoup.typepad.com/peasoup/2016/08/suppose-you-are-sitting-at-yourdesk-reflecting-on-a-moral-question-now-suppose-that-as-you-are-reflecting-on-this-question.html#more