Here is the seventh chunk of the argument.
Robert Frank (1988) could hardly be accused of attempting to provide a moral vision for a free society, but he makes a case for one way of resolving the moral contradiction of the free society. He attempts to show how a seemingly selfless adherence to the moral principles that support the efficient operation of the free market might ultimately be justified in egoistic terms after all. The basic strategy is to reap the long term benefits of playing by free market rules by foregoing the short term gains that can be made by breaking them. Of course, this depends on finding other agents who also obey the free market rules—and enabling them to find you. Otherwise, as Frank shows, the strategy will be undercut and ultimately defeated by rule breakers.
How this strategy works can be illustrated by the case of honesty. Honest behavior is economically selfless on those occasions when one could gain by dishonesty (for example, perhaps by not paying the bill of a supplier who is about to go bankrupt or the bill of a small contractor who can’t afford to sue). Now, suppose you committed yourself to a policy of strict honesty. If others knew this, they would have reason to prefer doing business with you over others, to give you easier credit, etc. For, they could be confident that you would not rip them off; i.e., impose costs on them through dishonesty. In North’s terms, doing business with you would lower their transactions costs. Thus, by foregoing the occasional rip off, you reap the rewards of doing more business on better terms. And notice, by the way, that even if other people adopt the same honesty strategy, thereby undercutting your “market edge,” your terms of doing business will still be better. Transactions costs are still lowered, even if everybody becomes completely honest (indeed, they are lowered even more).
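The arithmetic of the honesty strategy can be sketched in a toy comparison. All the numbers here (margins, vetting costs, frequency of "golden opportunities") are illustrative assumptions of mine, not Frank's; the point is only that forgone rip-off gains can be outweighed by doing more business on better terms.

```python
# Toy model (not Frank's own): compare lifetime payoffs of a strictly
# honest agent vs. an opportunist, under assumed, illustrative numbers.

def lifetime_payoff(honest, n_periods=100):
    margin = 10.0         # profit per transaction
    ripoff_gain = 30.0    # one-off gain from cheating
    ripoff_chance = 0.05  # fraction of periods offering a "golden opportunity"
    total = 0.0
    for _ in range(n_periods):
        if honest:
            # A reputation for honesty lowers partners' transactions costs:
            # they skip costly vetting, so the honest agent gets more deals
            # on better terms.
            deals, vetting_discount = 3, 0.0
        else:
            # Partners vet the opportunist and pass the cost back in terms.
            deals, vetting_discount = 2, 2.0
        total += deals * (margin - vetting_discount)
        if not honest:
            total += ripoff_chance * ripoff_gain  # expected cheating gain
    return total

print(lifetime_payoff(honest=True), lifetime_payoff(honest=False))  # → 3000.0 1750.0
```

Under these (assumed) parameters, the occasional rip-off never makes up for the lost volume and worse terms, which is the shape of Frank's claim.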
Of course, this works only if people know you are completely honest. And how are people to know this? Frank suggests two mechanisms, reputation and emotional signaling. Reputation is basically the record of your past behavior. Learning this entails a transactions cost, but not necessarily a particularly high one. A potential problem with reputation as a sign of honesty is that if a dishonest person is sufficiently clever, he will exploit only “golden opportunities”—situations where the chances of one’s dishonesty being detected are very low—and remain honest in all other situations. If dishonest people could maintain this strategy, reputation would have little value. However, Frank argues that people typically do not have the discipline to restrict their dishonesty to golden opportunities. Therefore, people who are dishonest will usually in fact have bad reputations. By the same token, people with good reputations will usually have a strong general disposition to honesty, one that leads them to be honest not only when the chances are good that dishonesty would be detected, but in golden opportunities as well.
Frank thinks a general disposition to honesty is mainly a matter of one’s emotional constitution. One is prone, for whatever reason, perhaps somewhat by nature but especially by socialization, to feel bad about dishonesty. One maintains honesty, then, because the material incentive to dishonesty is counterbalanced by the emotional painfulness of dishonesty. The fact that honesty is maintained by emotional incentives lies at the heart of the second process whereby one’s commitment to honesty can be made known to others, emotional signaling. The idea is that emotions are hard to mimic. Actors and others with talent and lots of practice can learn to imitate various emotional expressions reasonably well, but this takes deliberate effort. For most people, emotional states are not that easy to fake. If this is so, then the emotions associated with lying might be difficult to mask, those associated with sincerity difficult to simulate. And in that case, people might know of one’s commitment to honesty by being able to “judge character.”
To support this theory, Frank presents the results of an experiment he performed in which participants met and chatted with one another in groups of three for half an hour before playing, for real money, a simple prisoner’s dilemma game. Each participant would play twice, once with each of the other two in his group. The players’ choices, “cooperate” or “defect,” for each game were kept completely private and anonymous. Even the payouts were partially randomized so that no participant could infer later what choices his fellow players had made. The participants were told at the start that they would finish by playing the prisoner’s dilemma game. The half hour of chat before playing enabled the players to get to know each other, size each other up, even talk about their feelings and ideas about prisoner’s dilemma games. Then, before playing, each participant made predictions about the other two participants’ choices. The results showed reasonably good accuracy. Even after only a half hour with completely anonymous strangers, participants predicted cooperation with 75% accuracy (base rate: 68%) and defection with 60% accuracy (base rate: 32%). The accuracy of defection predictions is particularly impressive: Since only 32% of players defected, 60% accurate predictions is nearly twice the rate that would be expected due to chance.
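The chance-corrected comparison can be made explicit with the figures just reported (these are simply the percentages from the text restated, nothing more):

```python
# Accuracy figures reported from Frank's experiment, restated.
coop_base, defect_base = 0.68, 0.32  # base rates of cooperation and defection
coop_acc, defect_acc = 0.75, 0.60    # accuracy of the participants' predictions

# "Lift" over the base rate: how much better than blind guessing each
# kind of prediction was.
print(round(coop_acc / coop_base, 2))    # → 1.1
print(round(defect_acc / defect_base, 2))  # → 1.88, i.e., nearly twice chance
```

The asymmetry is the interesting part: cooperation predictions beat chance only modestly, while defection predictions nearly doubled it.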
It isn’t just honesty. Frank analyzes certain other moral impulses similarly. For instance, the desire for retribution. As with honesty, there are occasions when it does not pay to exact retribution. For example, suppose I have a $200 leather briefcase. If you were to steal it, I could press charges, but the hassle of doing so and going to court would cost me $300. You are about to leave town, and I will never see you again or have any future dealings with you anyway. In this situation it is economically irrational (it is the sunk cost fallacy) to pursue punishing you if you should steal my briefcase. But there is a particularly obvious downside to economic rationality in this case, which is that, if you know that I am an economically rational person (and know the pertinent facts in this case), and if you are an economically rational person (meaning, in this situation, unscrupulous—see preceding remarks re honesty) then you would get a free briefcase and I would be your patsy. You would be deterred only if you had reason to think I would commit the sunk cost fallacy and pursue you for retribution instead of just buying a new briefcase and getting on with my life. Since pursuing punishment in this case cannot be justified economically, my doing it would have to be motivated by lust for revenge or for righteous punishment. If I were prone to this lust and you could sense it, you would be deterred from stealing from me. The ironic benefit, of course, is that the deterrent effect of my penchant for revenge would mean I would rarely need to act on it. And this is an economic benefit of my uneconomic behavior. By deterring violations of my property rights, I spare myself the need either to punish a thief or buy a new briefcase. Again, to put this in North’s terms of transactions costs, a society with less stealing is a society of reduced transactions costs and correspondingly greater market efficiency. 
Of course, this economic benefit accrues only because of my (and others’) penchant for a certain form of economically irrational behavior.
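The deterrence logic can be put in a stylized payoff check. The $200 and $300 figures come from the example; the thief’s penalty and the probability of pursuit are assumed purely for illustration.

```python
# Stylized deterrence check. BRIEFCASE and COURT_COST are from the
# example in the text; PENALTY and the pursuit probabilities are assumed.

BRIEFCASE = 200.0   # value of the briefcase to the thief
COURT_COST = 300.0  # victim's cost of pressing charges (exceeds the loss)
PENALTY = 500.0     # assumed cost to the thief if actually prosecuted

def thief_expected_gain(p_pursue):
    # The thief weighs the briefcase against the chance of being pursued.
    return BRIEFCASE - p_pursue * PENALTY

# Facing a purely "economically rational" victim, who never pursues:
print(thief_expected_gain(0.0))  # → 200.0: stealing pays; the victim is a patsy
# Facing a victim known to punish out of sheer lust for revenge:
print(thief_expected_gain(0.9))  # → -250.0: stealing does not pay
```

The victim’s known disposition to pursue, not any actual pursuit, does the work: because stealing no longer pays, the $300 court cost is rarely incurred.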
Frank emphasizes the economically irrational element in always pursuing honesty and punishment (and certain other moral principles). In certain situations, being honest and exacting punishment require sacrificing material gains. It may be that these sacrifices are made up for in the long run—Frank argues that this might generally be the case and that its being the case has led to the genetic evolution of certain moral emotions—but there is no guarantee of this, and it will almost certainly not hold true for all agents.
More importantly, if long-term material rewards are to accrue, the commitment to foregoing material rewards in certain short-term situations must be genuine. There can be no second guessing, when these situations arise, whether to follow through on one’s commitment to being honest or pursuing punishment. For, if one considers the material rewards in these situations, they will impel one to be dishonest or forego punishment. And this will mean that one’s commitment is fake. But fake commitments will not produce the looked-for long term benefits. People will not trust you if they think you are only honest as long as you can’t benefit from dishonesty or fear you if they think you only seek revenge when it is not economically costly. The strategy requires that people believe that your commitment to honesty and punishment is genuine. The only way they are likely to believe that is if your commitment to honesty and punishment is genuine, as evidenced by emotional and behavioral signals that are very hard to mimic.
Thus, ironically, the strategy for securing long term material benefits requires that you genuinely not care about those benefits as opposed to certain moral values. So, certain genuine moral commitments, distinct from the egoistic material reward seeking of the free market, can be justified ultimately in terms of egoistic material rewards. Frank seems to have shown that the free market itself rewards and thus justifies certain nonegoistic moral commitments.
Frank’s derivation of moral commitments from the egoistic values of the free market is ingenious. He succeeds in providing a reason why an egoistic utility maximizer should want to make nonegoistic moral commitments and an explanation of the role of these commitments in the operation of the free market. And this is what we asked for. Furthermore, his solution seems potentially comprehensive in that every way in which transactions costs can be reduced through moral commitments might be covered by his strategy—though this has not been shown.
If there is any reason to be unhappy with Frank’s approach, it is that it is reductive in what seems to be the wrong direction: moral values of honesty, respect for property, and so forth, are reduced to material values of health and wealth, not vice versa. This comes out in several ways. For instance, as Frank acknowledges, an alternative to the moral commitment strategy is to become good at mimicking moral commitment and exploiting the opportunities for safe and profitable wrongdoing that one thereby encounters. Certain aspects of human psychology might make the mimicking strategy difficult for most people to pull off, as Frank argues (1988, ch. 8), but there will probably be people with the needed talents, and in any event it is, from the point of view of Frank’s theory, a merely technical question. From this view, to be a good mimic able to rip people off effectively would be a good thing for the mimic. Moreover, as we have noted before, the mimicking strategy gets easier as the free market becomes more efficient. The more trustworthy, law-abiding, forthcoming, and amiable the agents in the free market are, the less point there is in going to the expense of background checks, credit checks, security guards, vaults, and so on. In the limit of perfect market efficiency, these safeguards disappear altogether. And as such safeguards decline, so does the cost of mimicry, to the point where agents practically invite rule violations. And on the view of Frank’s theory, rule violations in such a situation are the appropriate response for many agents. Inevitably such agents will emerge as the free market becomes more efficient, forcing people to be more guarded and market efficiency to decline correspondingly. The proportions fluctuate until an equilibrium is reached in which none of the morally committed agents can do better by switching to the mimicry strategy and none of the mimicking agents can do better by switching to the moral commitment strategy.
And in this equilibrium, none of the mimickers has any reason, on Frank’s premises, to change.
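The fluctuation toward a mixed equilibrium can be illustrated with a toy frequency-dependent model. The payoff functions below are invented for illustration (nothing here is Frank’s own model): mimicry pays best when mimics are rare, and its edge erodes as guarding rises.

```python
# Toy frequency-dependent model with assumed payoffs: m is the fraction
# of mimics in the population; the rest are morally committed agents.

def payoffs(m):
    committed = 10.0 - 6.0 * m  # trust benefits erode as mimics spread
    mimic = 14.0 - 20.0 * m     # mimicry's edge shrinks as everyone guards more
    return committed, mimic

m = 0.01  # start with 1% mimics in a highly efficient, trusting market
for _ in range(10_000):
    c, k = payoffs(m)
    # Simple replicator step: the better-paying strategy grows in frequency.
    m += 0.001 * m * (1 - m) * (k - c)

print(round(m, 3))  # → 0.286: settles near 2/7, where the payoffs are equal
```

At the resting point neither type gains by switching, which is exactly the equilibrium described above; in it, the mimics persist.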
The problem is that although the moral commitment embraced by those who pursue the moral commitment strategy must be genuine, nevertheless ultimately the only values recognized on this view are those of material reward. The moral commitment strategy remains ultimately a sneaky way of maximizing material rewards in the long term. Therefore its normative force is contingent on its ability to actually do this. By the same token, this view finds no place for intrinsic, nonmaterial rewards. As McCloskey asks in her brief discussion of Frank, “What about human flourishing, beyond bread alone?” (2006, 414). A moral vision for a free society should explain the place of this as well.
- Frank, Robert H. 1988. Passions within Reason: The Strategic Role of the Emotions. Norton.
- McCloskey, Deirdre N. 2006. The Bourgeois Virtues: Ethics for an Age of Commerce. University of Chicago Press.
I agree with the criticism of Frank in the last two paragraphs of your post. In fact, the last time I read Frank, probably in grad school, those criticisms occurred to me as well, and ultimately dissuaded me from finishing the book. The objections seemed so obvious, and so obviously subversive of Frank’s project, that I didn’t see the point in reading to the end (and I don’t think I ever did).
I’ve always wondered, though, about another objection to Frank–an epistemic one. Outside of the “laboratory,” so to speak, I’ve never quite understood how he thinks we actually make the inferences we’re supposed to make about others’ moral character. In other words, how do I know (or come to know) that you are following a policy of strict honesty? The answers are supposed to be that I rely on reputation and emotional signaling, but is that really plausible?
On reputation: Does badness of reputation really track badness of character? In a rough way, maybe, but Frank seems to be claiming something stronger than what would be borne out by a rough correspondence between the two things, and I’m just skeptical of the truth of what he’s saying. I’m curious what the evidence is, and how good you think the evidence is. (Some of the evidence may well be cited in the book. I own Frank’s book but haven’t read it in a while, and of course, the book itself was published almost thirty years ago.)
Likewise, the idea of emotional signaling seems to suggest that we can “read” someone’s character in a quasi-perceptual way off of our personal encounters with them. I guess my question here is: Really? Again, I’m skeptical. “The only way they are likely to believe that is if your commitment to honesty and punishment is genuine, as evidenced by emotional and behavioral signals that are very hard to mimic.” It’s not just a matter of how hard they are to mimic, but how hard they are to read. I teach about 100 students a term. I get about 5 excuses a day from students who haven’t done this reading or that assignment. Some of these excuses are sent to me in electronic form, some are tearfully made in person. Are my students lying or telling the truth? I have no idea. Nor have I been helped by time or experience. It’s gotten to the point where whether they are lying or telling the truth has become a matter of indifference to me. Instead of improving my skills at reading others’ characters, I’ve set up “strict liability” policies that absolve me of the responsibility of distinguishing between lying and truth-telling altogether.
The issue goes well beyond students. It applies to everyone I deal with. Sometimes I can tell when I’m being bullshitted, but those cases are the exception that proves the rule. I could just be “on the spectrum,” but for the most part, I have no idea how to read someone’s moral character from casual interactions with them. (In fact, hasty generalization of this sort seems to militate against fairness, itself an important moral value in social contexts.)
Two literatures have emerged since the publication of Frank’s book that are relevant to what he says, but seem to cast doubt on it. I mention them not because I have a real command of either literature (much less a worked out view on what they say), but because a defender of Frank’s view would need to address the claims of these literatures in order to make Frank’s claims epistemically plausible.
One is the literature on the fundamental attribution error. Here the idea is supposed to be that we’re not able to do what Frank thinks we so easily do: ascribe counterfactually stable traits like honesty to people on the basis of everyday interactions with them. (Classic paper: Gilbert Harman’s “Virtue Ethics and the Fundamental Attribution Error.” Also relevant and worth reading: Roderick Long’s “Why Character Traits Are Not Dispositions.”)
The other is the literature on inferences from perceptually apparent traits to moral traits, where a standard claim is that we have a strong bias toward attributing goodness on the basis of judgments about morally irrelevant traits, like physical attractiveness (“the halo effect,” “what is beautiful is good”). (Classic readings: Karen Dion et al., “What Is Beautiful Is Good,” Journal of Personality and Social Psychology; Judith Langlois et al., “Maxims or Myths of Beauty? A Meta-Analytic and Theoretical Review,” Psychological Bulletin; and Deborah Rhode, The Beauty Bias: The Injustice of Appearance in Life and Law.) I’m curious what anyone thinks about this.
Excellent comment. You make lots of great points, most of which I have no real idea how to answer on Frank’s behalf.
It should be noted that what Frank is trying to explain is how we could have come to be genetically programmed to genuinely care about certain moral values—honesty, punishment, loyalty, and fairness—as a result of Darwinian natural selection during our hunter-gatherer prehistory. Moreover, he explicitly rejects group selection as a possible mechanism (ironically on the authority of E. O. Wilson, who later would become an advocate of group selection). Thus, he needs a reason why “selfish genes” would evolve partially selfless traits. The theory of emotional signaling (and reliability of reputation) is designed as a solution to this problem. So he is trying to solve the right problem—the need for genuine morality—within a pretty limited framework.
The main value of the book for me is the brilliance of its analysis of the general situation leading to recognition of the need for genuine morality. I think it’s unfortunate that the book is not better known and its lessons not better learned.
The specific emotional signaling mechanism is supposed to be that we have some combination of biological emotional predispositions (such as for revenge) and social conditioning (honesty is probably mainly a matter of this), which become so firmly embedded in our character that our behavior becomes nervous and unnatural when we try to fake them (for instance, blushing when lying).
On the one hand, it wouldn’t surprise me to learn that people are not actually very good at telling when people are lying. (I don’t know of any particular empirical evidence about it.) On the other hand, I don’t know how important that really is (despite the way it is highlighted in Frank’s account). What matters is whether people can generally size other people up as to whether they will make good business partners or formidable enemies or hard bargainers, etc. It doesn’t seem implausible to me that people can often do this pretty well. The reason I describe Frank’s trust experiment in my paper, where people do a reasonably good job of predicting whether their partners in a prisoner’s dilemma game will cooperate or defect, is that it goes directly to this issue. But I haven’t followed up to see whether more experiments of this kind have been conducted.
I wouldn’t be too worried about the fundamental attribution error, the halo effect, and other such findings. The effects are weak—all social science effects are weak—and thus leave room for their contraries. (Incidentally, it’s been a long time since I read the Harman paper, but I seem to recall that he lays heavy emphasis on findings in personality psychology (due to Mischel) to the effect that supposed personality traits are dominated by situational factors; in other words, that people’s behavior is governed by environmental context and that personality traits have almost no predictive value. But Harman’s paper is old, and things have changed in the psychology of personality.)
Interestingly, economist David Rose, in his relatively recent book, The Moral Foundation of Economic Behavior, follows Frank very closely—he seems to regard his theory as a sort of Frank 2.0—but entirely abandons the emotional signaling idea. Instead, he regards moral values as entirely a matter of culture. He avoids the need for agents to identify trustworthy individuals by making trustworthiness societal. That is, the basis of an individual’s trust is that most members of his society are in fact trustworthy, because that is the culture. This fits new institutionalism in economics with its emphasis on informal institutions (such as cultural mores), their impact on economic performance, and their consequent (supposed) ability to explain the economic success of some societies and failure of others. Cultures of trustworthy individuals are supposed to evolve by a process of group level selection (cultural, not genetic), à la Hayek as described in my paper. So perhaps Rose really does retain the good in Frank and jettison the weak parts.
Incidentally, your criticism of Frank’s reductiveness reminded me of a letter to the editor I recently saw in The New York Times. Here’s your criticism:
I won’t cut and paste the whole letter, but here’s a link to it. The writer makes a fair point against the person he’s criticizing, but what’s interesting is his implicit equation of a person’s “best interests” with “monetary subvention from the government,” or more generally, “money.” In other words: when comparing any two policies, the one that’s more in your “best interests” is the one that puts more money in your bank account, regardless of any other consideration.
Nice discussion here all-around. I have only one thing to add: Frank’s view, in essentials, is similar to Derek Parfit’s idea that we might have egoistic reasons to have non-egoistic motivations (and similarly moral reasons to have non-moral motivations). This sort of point is interesting but it does not squarely address the conditions for our having non-instrumental reason to be moral (e.g., follow a strict rule of honesty)…
As indicated in some of my other comments, I think the Parfitian phenomenon is in fact relevant to specifying the conditions for our coming to have intrinsic reasons. Relative to the right basic affective, conative, and human-functional (and perhaps circumstantial) background conditions, reasons to have intrinsic concerns are reasons to do things that will result in one coming to have (additional) intrinsic reasons.
Yes, I agree. Actually, Gauthier touches on this also, although without making such a big point of it. But it’s no accident, I think, that Gauthier speaks of one giving oneself the disposition to respect the rights of others as a condition of entering their society. In other words, one trains oneself somehow to be intrinsically motivated to respect rights. (And the reason to do this is to enjoy the fruits of living in society.) I argue that this won’t work in my critique of Gauthier way back in Part 3 of the essay.