Robert Frank (1988) could hardly be accused of attempting to provide a moral vision for a free society, but he makes a case for one way of resolving the moral contradiction of the free society. He attempts to show how a seemingly selfless adherence to the moral principles that support the efficient operation of the free market might ultimately be justified in egoistic terms after all. The basic strategy is to reap the long-term benefits of playing by free market rules by forgoing the short-term gains that can be made by breaking them. Of course, this depends on finding other agents who also obey the free market rules—and enabling them to find you. Otherwise, as Frank shows, the strategy will be undercut and ultimately defeated by rule breakers.
How this strategy works can be illustrated by the case of honesty. Honest behavior is economically selfless on those occasions when one could gain by dishonesty (for example, perhaps by not paying the bill of a supplier who is about to go bankrupt or the bill of a small contractor who can’t afford to sue). Now, suppose you committed yourself to a policy of strict honesty. If others knew this, they would have reason to prefer doing business with you over others, to give you easier credit, etc. For, they could be confident that you would not rip them off; i.e., impose costs on them through dishonesty. In North’s terms, doing business with you would lower their transactions costs. Thus, by forgoing the occasional rip-off, you reap the rewards of doing more business on better terms. And notice, by the way, that even if other people adopt the same honesty strategy, thereby undercutting your “market edge,” your terms of doing business will still be better. Transactions costs are still lowered, even if everybody becomes completely honest (indeed, they are lowered even more).
Of course, this works only if people know you are completely honest. And how are people to know this? Frank suggests two mechanisms, reputation and emotional signaling. Reputation is basically the record of your past behavior. Learning this entails a transactions cost, but not necessarily a particularly high one. A potential problem with reputation as a sign of honesty is that if a dishonest person is sufficiently clever, he will exploit only “golden opportunities”—situations where the chances of one’s dishonesty being detected are very low—and remain honest in all other situations. If dishonest people could maintain this strategy, reputation would have little value. However, Frank argues that people typically do not have the discipline to restrict their dishonesty to golden opportunities. Therefore, people who are dishonest will usually in fact have bad reputations. By the same token, people with good reputations will usually have a strong general disposition to honesty, one that leads them to be honest not only when the chances are good that dishonesty would be detected, but in golden opportunities as well.
Frank thinks a general disposition to honesty is mainly a matter of one’s emotional constitution. One is prone, for whatever reason, perhaps somewhat by nature but especially by socialization, to feel bad about dishonesty. One maintains honesty, then, because the material incentive to dishonesty is counterbalanced by the emotional painfulness of dishonesty. The fact that honesty is maintained by emotional incentives lies at the heart of the second process whereby one’s commitment to honesty can be made known to others, emotional signaling. The idea is that emotions are hard to mimic. Actors and others can learn, with talent and lots of practice, to do a fair job of imitating various emotional expressions. But this takes deliberate effort. For most people, emotional states are not that easy to fake. If this is so, then the emotions associated with lying might be difficult to mask, those associated with sincerity difficult to simulate. And in that case, people might know of one’s commitment to honesty by being able to “judge character.”
To support this theory, Frank presents the results of an experiment he performed in which participants met and chatted with one another in groups of three for half an hour before playing, for real money, a simple prisoner’s dilemma game. Each participant would play twice, once with each of the other two in his group. The players’ choices, “cooperate” or “defect,” for each game were kept completely private and anonymous. Even the payouts were partially randomized so that no participant could infer later what choices his fellow players had made. The participants were told at the start that they would finish by playing the prisoner’s dilemma game. The half hour of chat before playing enabled the players to get to know each other, size each other up, even talk about their feelings and ideas about prisoner’s dilemma games. Then, before playing, each participant made predictions about the other two participants’ choices. The results showed reasonably good accuracy. Even after only a half hour with completely anonymous strangers, participants predicted cooperation with 75% accuracy (base rate: 68%) and defection with 60% accuracy (base rate: 32%). The accuracy of defection predictions is particularly impressive: Since only 32% of players defected, 60% accurate predictions is nearly twice the rate that would be expected due to chance.
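The claim that the defection predictions are "nearly twice" the chance rate can be checked directly. A minimal sketch of the arithmetic, using the figures reported above (the "chance" accuracy of a prediction is simply the base rate of the predicted behavior):

```python
# Chance-baseline arithmetic for Frank's prediction experiment.
# All numbers are those reported in the text above.

base_cooperate = 0.68   # fraction of players who cooperated (base rate)
base_defect = 0.32      # fraction of players who defected (base rate)

acc_cooperate = 0.75    # accuracy of "will cooperate" predictions
acc_defect = 0.60       # accuracy of "will defect" predictions

# Lift over chance: ratio of observed accuracy to blind guessing.
lift_cooperate = acc_cooperate / base_cooperate
lift_defect = acc_defect / base_defect

print(f"cooperate predictions: {lift_cooperate:.2f}x chance")  # ~1.10x
print(f"defect predictions:    {lift_defect:.2f}x chance")     # ~1.88x
```

The asymmetry is why the defection result carries the argument: cooperation predictions beat chance only modestly, while defection predictions come in at 1.875 times the chance rate.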
It isn’t just honesty. Frank analyzes certain other moral impulses similarly. For instance, the desire for retribution. As with honesty, there are occasions when it does not pay to exact retribution. For example, suppose I have a $200 leather briefcase. If you were to steal it, I could press charges, but the hassle of doing so and going to court would cost me $300. You are about to leave town, and I will never see you again or have any future dealings with you anyway. In this situation it is economically irrational (a case of the sunk cost fallacy) to pursue punishing you if you should steal my briefcase. But there is a particularly obvious downside to economic rationality in this case, which is that, if you know that I am an economically rational person (and know the pertinent facts in this case), and if you are an economically rational person (meaning, in this situation, unscrupulous—see the preceding remarks regarding honesty), then you would get a free briefcase and I would be your patsy. You would be deterred only if you had reason to think I would commit the sunk cost fallacy and pursue you for retribution instead of just buying a new briefcase and getting on with my life. Since pursuing punishment in this case cannot be justified economically, my doing it would have to be motivated by lust for revenge or for righteous punishment. If I were prone to this lust and you could sense it, you would be deterred from stealing from me. The ironic benefit, of course, is that the deterrent effect of my penchant for revenge would mean I would rarely need to act on it. And this is an economic benefit of my uneconomic behavior. By deterring violations of my property rights, I spare myself the need either to punish a thief or buy a new briefcase. Again, to put this in North’s terms of transactions costs, a society with less stealing is a society of reduced transactions costs and correspondingly greater market efficiency.
Of course, this economic benefit accrues only because of my (and others’) penchant for a certain form of economically irrational behavior.
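The payoff structure of the briefcase example can be made explicit. The dollar figures below come from the text; the thief's behavior model (steal if and only if the owner is known not to retaliate) is a simplifying assumption introduced for illustration:

```python
# Toy payoff comparison for the briefcase example. Dollar figures are
# from the text; the thief model (steals only from owners known not to
# retaliate) is an illustrative simplification.

BRIEFCASE = 200   # value of the briefcase
LITIGATION = 300  # cost of pressing charges and going to court

def owner_loss(committed_to_revenge: bool) -> int:
    """Owner's total loss, given a rational thief who steals only
    when he knows the owner will not pursue punishment."""
    thief_steals = not committed_to_revenge
    if thief_steals:
        # The "rational" owner writes off the briefcase rather than
        # spend $300 to punish a $200 theft.
        return BRIEFCASE
    # The committed owner is never robbed, so never pays either cost.
    return 0

print(owner_loss(committed_to_revenge=False))  # 200: the rational patsy
print(owner_loss(committed_to_revenge=True))   # 0: deterrence pays
```

On these assumptions the "irrational" disposition is never actually exercised: its whole economic value lies in being credibly advertised.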
Frank emphasizes the economically irrational element in always pursuing honesty and punishment (and certain other moral principles). In certain situations, being honest and exacting punishment require sacrificing material gains. It may be that these sacrifices are made up for in the long run—Frank argues that this might generally be the case and that its being the case has led to the genetic evolution of certain moral emotions—but there is no guarantee of this, and it will almost certainly not hold true for all agents.
More importantly, if long-term material rewards are to accrue, the commitment to forgoing material rewards in certain short-term situations must be genuine. There can be no second-guessing, when these situations arise, whether to follow through on one’s commitment to being honest or pursuing punishment. For, if one considers the material rewards in these situations, they will impel one to be dishonest or forgo punishment. And this will mean that one’s commitment is fake. But fake commitments will not produce the looked-for long-term benefits. People will not trust you if they think you are honest only when you cannot benefit from dishonesty, and will not fear you if they think you seek revenge only when it is not economically costly. The strategy requires that people believe that your commitment to honesty and punishment is genuine. The only way they are likely to believe that is if your commitment to honesty and punishment is genuine, as evidenced by emotional and behavioral signals that are very hard to mimic.
Thus, ironically, the strategy for securing long-term material benefits requires that you genuinely not care about those benefits as opposed to certain moral values. So, certain genuine moral commitments, distinct from the egoistic material reward seeking of the free market, can be justified ultimately in terms of egoistic material rewards. Frank seems to have shown that the free market itself rewards and thus justifies certain nonegoistic moral commitments.
Frank’s derivation of moral commitments from the egoistic values of the free market is ingenious. He succeeds in providing a reason why an egoistic utility maximizer should want to make nonegoistic moral commitments and an explanation of the role of these commitments in the operation of the free market. And this is what we asked for. Furthermore, his solution seems potentially comprehensive in that every way in which transactions costs can be reduced through moral commitments might be covered by his strategy—though this has not been shown.
If there is any reason to be unhappy with Frank’s approach, it is that it is reductive in what seems to be the wrong direction: moral values of honesty, respect for property, and so forth, are reduced to material values of health and wealth, not vice versa. This comes out in several ways. For instance, as Frank acknowledges, an alternative to the moral commitment strategy is to become good at mimicking moral commitment and exploiting the opportunities for safe and profitable wrongdoing that one thereby encounters. Certain aspects of human psychology might make the mimicking strategy difficult for most people to pull off, as Frank argues (1988, ch. 8), but there will probably be people with the needed talents, and in any event, from the standpoint of Frank’s theory, it is a merely technical question. From this view, to be a good mimic able to rip people off effectively would be a good thing for the mimic. Moreover, as we have noted before, the mimicking strategy gets easier as the free market becomes more efficient. The more trustworthy, law-abiding, forthcoming, and amiable the agents in the free market are, the less point there is in going to the expense of background checks, credit checks, security guards, vaults, and so on. In the limit of perfect market efficiency, these disappear altogether. And as such safeguards decline, so does the cost of mimicry, to the point where agents practically invite rule violations. And on the view of Frank’s theory, rule violations in such a situation are the appropriate response for many agents. Inevitably such agents will emerge as the free market becomes more efficient, forcing people to be more guarded and market efficiency to correspondingly decline. This process fluctuates until an equilibrium position is reached in which none of the morally committed agents can do better by switching to the mimicry strategy and none of the mimicking agents can do better by switching to the moral commitment strategy.
And in this equilibrium, none of the mimickers has any reason, on Frank’s premises, to change.
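The equilibrium dynamic described above can be sketched with a simple replicator-style model. All payoff numbers below are invented for illustration; only the qualitative shape follows the text: mimics thrive when rare (trusting committers are easy to exploit) and suffer when common (safeguards and wariness rise):

```python
# Hypothetical replicator-dynamics sketch of the mimicry/commitment
# equilibrium. Payoff functions and constants are illustrative
# assumptions, not anything derived from Frank's book.

def commit_payoff(p: float) -> float:
    # Committers enjoy the gains of low-cost trade, eroded as the
    # share of mimics p grows and they get exploited more often.
    return 10.0 - 6.0 * p

def mimic_payoff(p: float) -> float:
    # Mimics profit from exploiting trusting committers when rare,
    # but face rising safeguards and warier partners as they spread.
    return 14.0 - 16.0 * p

p = 0.01  # initial fraction of mimics in the population
for _ in range(10_000):
    # Replicator update: a strategy grows in frequency when it
    # outperforms the alternative, shrinks when it underperforms.
    p += 0.01 * p * (1 - p) * (mimic_payoff(p) - commit_payoff(p))

print(round(p, 3))  # settles where the two payoffs are equal (here, 0.4)
```

With these particular numbers the population settles at 40% mimics: below that share mimicry outpays commitment and mimics invade; above it the reverse holds. At the rest point neither type gains by switching, which is exactly the equilibrium the text describes.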
The problem is that although the moral commitment embraced by those who pursue the moral commitment strategy must be genuine, nevertheless ultimately the only values recognized on this view are those of material reward. The moral commitment strategy remains ultimately a sneaky way of maximizing material rewards in the long term. Therefore its normative force is contingent on its ability to actually do this. By the same token, this view finds no place for intrinsic, nonmaterial rewards. As McCloskey asks in her brief discussion of Frank, “What about human flourishing, beyond bread alone?” (2006, 414). A moral vision for a free society should explain the place of this as well.
- Frank, Robert H. 1988. Passions within Reason: The Strategic Role of the Emotions. Norton.
- McCloskey, Deirdre N. 2006. The Bourgeois Virtues: Ethics for an Age of Commerce. University of Chicago Press.