An economist—and perhaps most people—would treat the punishment a criminal justly suffers as the result of his wrongdoing as a bad thing for the criminal. But Plato argues (for example, in the Gorgias) that punishment is good for the criminal because it corrects his unjust ways and makes him a better person. And, assuming for the sake of argument that Plato is right about the effect of punishment, he has a point. But of course, so does the economist. Now, if both are right, it seems to follow that we have two different ways of calculating our good, the one invoked by the economist and the one invoked by Plato. Are there really two distinct ways of calculating our good, or is this a mirage? If there really are two, what distinguishes them and how is each justified?
The two ways might be reconciled if the criminal is merely shortsighted and doesn’t realize that he can, after all, maximize his gains by undergoing punishment. Undergoing punishment would then be like taking medicine to become healthy. Taking medicine is locally a negative event, true enough, but it results in higher global rewards. In another metaphor, punishment is a local minimum that must be traversed to reach a global maximum—a trough one must pass through to reach a higher hill.
But this won’t do. The economist’s view of punishment as negative is not so easily set aside. The economist can easily explain the good of taking medicine: the individual compares the negative degree of the treatment (together with the probability of its effectiveness) with the negative degree of the ailment (together with its probable future course without treatment) and chooses the less negative of the two expected futures. Assuming the medicine would work and is not worse than the ailment, then, taking the medicine is good. But this only works because the ailment is evaluated negatively. And the trouble is that it is hardly clear that the criminal regards his own “ailment”—dishonesty, injustice—as a negative. Or anyway, as sufficiently negative to counterbalance the profits of crime.
Injustice might be a global negative if it results in lost economic opportunities, if it is bad business. In that case, punishment would turn out to be good in economic terms if it shocks the criminal out of his unjust habits or proclivities and converts him to justice. Then punishment would be the trough the criminal passes through to reach the higher hill of justice and its greater profitability. In many cases, this might be correct. But surely not in all. It is naïve to think that justice is always the most profitable course of action, even in the long run. (And by the way, there is not always a long run.) There will always be opportunities to commit injustice with very little risk of detection or punishment, so that the most profitable course of action is to mimic a just person while taking advantage of these opportunities as they arise. An interesting result of game theory is that such opportunities will tend to proliferate as the number of just persons in a society increases. For, the greater the number of just agents, the less is the need for an apparatus of vigilance, wariness, contracts, lawyers, detectives, prosecution, and enforcement. So, since these things are not free, they will atrophy, thus enlarging the opportunities for injustice. Therefore, the more that just behavior prevails in a society, the more injustice is encouraged by utilitarian considerations; i.e., by economic rationality.
The paradigmatic illustration of the economic problem of justice is, of course, the prisoner’s dilemma. In a prisoner’s dilemma, it is good to cooperate if you are with another cooperator—but it is even better to defect. Notice that the paradox of “rational” decision making yielding suboptimal outcomes in the prisoner’s dilemma cannot be resolved by the agents taking a longer or more comprehensive view of their interests. These are specified in the decision table, and as long as the situation is a true prisoner’s dilemma, economic rationality dictates the suboptimal outcome. The only way to reach the mutually optimal outcome is for the agents both to ignore the values specified in the decision table and in effect to value cooperation for its own sake. This fact is sometimes expressed by statements like, “it is rational to be irrational in a prisoner’s dilemma.” This is just to say that the agents could achieve a higher value outcome by not caring about value (and caring about cooperation instead). But such statements are not strictly true. On the one hand, if the agents really care less about the values in the table than about cooperation, then they are not being irrational when they cooperate; they are satisfying their preferences. And such an agent should still remain satisfied even if he is defected on. On the other hand, if the agents’ “irrational” behavior is really rational only because of the higher value outcomes they achieve, then that implies that the values in the table are the most important thing after all. And in that case, cooperating really is irrational. For, if the second agent cooperates, the first agent does better by defecting. And if the second agent defects, the first still does better by defecting. So regardless of what the second agent does, the first gets a higher value outcome by defecting. There is simply no way around this conclusion as long as the decision table values are the ruling consideration.
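The dominance argument above can be made concrete in a few lines. The payoff numbers here are my own illustrative choices, not anything from a particular text; any values with the standard ordering (temptation > reward > punishment > sucker) behave the same way:

```python
# Illustrative one-shot Prisoner's Dilemma payoffs with the standard
# ordering T > R > P > S. These particular numbers are assumptions;
# only the ordering matters for the dominance argument.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

# payoff[(my_move, their_move)] -> my payoff; "C" cooperate, "D" defect
payoff = {
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

def best_reply(their_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: payoff[(my, their_move)])

# Defection strictly dominates: it is the best reply to either move...
assert best_reply("C") == "D" and best_reply("D") == "D"
# ...yet mutual cooperation beats mutual defection for both players.
assert payoff[("C", "C")] > payoff[("D", "D")]
```

This is the whole paradox in miniature: as long as the table's values are the ruling consideration, `best_reply` returns "D" no matter what, even though both players would prefer the (C, C) cell to the (D, D) cell they end up in.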
Both the conventional economic agent who defects in the prisoner’s dilemma and the devoted cooperator could therefore be said to be rationally pursuing their preferences but merely to have different preferences. And we could say that the decision table in the prisoner’s dilemma does not accurately depict the devoted cooperator’s values. Perhaps the devoted cooperator is constitutionally unable to place much value on a good acquired through defection. For such a person, a prisoner’s dilemma decision table could not be constructed. He would be immune to the prisoner’s dilemma! Of course, he might also become the victim of defections. But in accordance with his scale of values, he would still be satisfied with his own course of action. Thus, the conventional economic agent and the devoted cooperator could be made equivalent as regards rationality. Each rationally pursues his values. It’s just that their values are not the same.
I want to resist this line of thought. I think there is a more comprehensive sense of “rational,” in which we can say that the devoted cooperator is more rational than the conventional economic agent in the prisoner’s dilemma, and in which we can agree with Plato that punishment is good for the criminal, at the same time as there is a more limited, economic sense of the term, in which defection is rational in the prisoner’s dilemma and punishment is bad for the criminal.
If the devoted cooperator is “really” rational, more so than the conventional economic agent, how is this so? It can only be because the devoted cooperator pursues his real interests and the economic agent does not. How can we say what these are? In Aristotelian fashion, we must appeal to the total, integrated good functioning of the organism, the human being. This should mean success in getting external rewards, as well as an absence of internal conflict, disruption, and discord. One should be comfortable and pain free in one’s own skin as well as efficacious in external functioning and successful in promoting one’s own existence in one’s environment. One should be well-adjusted both internally and externally.
Are our true interests in this sense better achieved by the devoted cooperator than by the economic agent? Not necessarily, if we restrict our attention to external rewards. True, the devoted cooperator will always outcompete the economic agent in a world where there are other devoted cooperators around and where these can be reliably identified. As long as cooperators can identify each other and exclude conventional economic agents (who will defect whenever possible), cooperators will achieve the higher gains. The trouble is that the conventional economic agents will learn to mimic cooperators and thereby exploit them. And, as argued above, the more cooperators predominate in society, the easier exploitation by the conventional economic agent becomes. Therefore, as far as economic rewards go, it will always be possible for at least some conventional economic agents to hold their own with devoted cooperators. Thus, although economist Robert Frank, in his brilliant Passions within Reason (W. W. Norton, 1988), argued that a disposition to devoted cooperation could evolve in a society through devoted cooperators’ ability to outcompete conventional economic agents, he did not argue that devoted cooperators could succeed to such an extent as to drive conventional economic agents entirely from the field. The predicted outcome is a draw: there will always be some equilibrium consisting of a certain percentage of devoted cooperators and a certain percentage of conventional economic agents.
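Why a draw rather than a cooperator takeover? A toy calculation, in the spirit of Frank's scrutiny argument, shows the mechanism. This is my own drastic simplification, not Frank's model, and every number in it is an illustrative assumption: cooperators can pay a scrutiny cost c to verify a partner and exclude mimics, or skip scrutiny and pair at random.

```python
# Toy version of the scrutiny argument (a simplification for illustration,
# not Frank's actual model). R = cooperators' mutual payoff, S = sucker
# payoff when fooled by a mimic, c = cost of scrutinizing a partner.
R, S, c = 3.0, 0.0, 0.6

def unscrutinized_payoff(x):
    """Expected payoff of a cooperator who pairs at random when a
    fraction x of the population are genuine cooperators."""
    return x * R + (1 - x) * S

scrutinized_payoff = R - c  # pair only with verified cooperators, net of cost

# Cooperators are indifferent about scrutinizing when (1 - x)(R - S) = c.
x_hat = 1 - c / (R - S)
assert abs(unscrutinized_payoff(x_hat) - scrutinized_payoff) < 1e-9

# Above x_hat, scrutiny doesn't pay, vigilance lapses, and mimics prosper;
# below x_hat, scrutiny pays and mimics are excluded. So the population
# hovers near x_hat (0.8 with these numbers): a mixed equilibrium, with
# neither type driven entirely from the field.
```

The crossing point moves with the assumed cost of vigilance, but so long as scrutiny costs something, there is always an interior mix at which mimics can hold their own.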
On the other hand, when it comes to internal success—the personal, psychological, social, “organismic” or holistic well-being of the agent—the devoted cooperator would seem to have a clear advantage. It may be that the conventional economic agent can outcompete the devoted cooperator in the sphere of economic rewards through mimicry, but the internal cost of this strategy is likely to be high if it entails living as a “Talented Mr. Ripley” who constantly deceives others and is conscious of the pain he brings them, whose life is a frenetic balancing act between lies and the truth, who must be constantly vigilant against the intelligence and perceptiveness of others, who lives in constant fear of getting caught, who is socially isolated and never able to really reveal his true self to anyone, and so forth. These are genuine aspects of well-being, but they do not show up—not directly—in the accounting of material rewards.
Yet the accounting of material rewards is important on its own. It is the basis of economic science and as such has a considerable measure of predictive success. Nearly all business activity—of banks, shops, factories, you name it—is measured in its terms, which seems right. People engage in economic activity to make money, and firms compete in an economic environment in which their growth and indeed their survival is determined by material outcomes. Again, analyses like Frank’s focus exclusively on material rewards, and they are very valuable. It is important to be able to see the sense in which defection is the rational action in the prisoner’s dilemma and the sense in which punishment is bad for the punished. But these cannot be seen from the standpoint of full rationality, which takes account of internal as well as external rewards. From the standpoint of full rationality, defection in the prisoner’s dilemma is pathological and corrective punishment is beneficial.
The standpoint of exclusively material rewards is important because very often, rightly or wrongly, it is how we actually reason and function. This is why it is predictively so successful. And in many contexts this standpoint is not unreasonable. Consider that ultimately our shaping is by the evolutionary process of natural selection, and natural selection is driven entirely by material outcomes.
Some economists may say that their focus is not on material rewards exclusively, but on “utilities,” which include all forms of preference satisfaction, internal (psychological, etc.) as well as external (material). They may say this, but it isn’t true. Nearly all economic analyses are conducted in terms of money, for example. The fact is that it is material goods that are almost always the exclusive focus of economic analysis. This is just why some of the analyses of Gary Becker, for example, which invoke the utility we place on the welfare of spouses and children, are so extraordinary—because they are so rare. In addition, the internal rewards I am talking about are not a matter of utility or preference satisfaction, but of objective well-being or good functioning, regardless of whether it is recognized or valued by the agent.
It seems, then, that there are grounds for two conceptions of rationality, an economic conception that focuses exclusively on material outcomes, and a full conception that focuses on holistic well-being, including internal as well as external flourishing. Economic rationality may be the more natural of the two. It is certainly more common. It is thought to be hard-headed and no-nonsense. It is the conception according to which defection is rational in the prisoner’s dilemma and punishment is bad for the criminal. Full rationality is the comprehensive conception. It encompasses the material rewards of economic rationality and also the rewards of proper internal functioning. These latter are less easily specifiable or measurable, but they are real and important nevertheless. It is full rationality that enables us to see why it is rational to be a devoted cooperator and why corrective punishment is good for the criminal. Full rationality takes as its standard our complete good, not just material well-being.
Now, a reason this matters for social theory: Libertarianism can be described as the political philosophy that assumes that economic rationality is all there is to rationality. But the above analysis indicates that it isn’t. Economic rationality falls short of full rationality. So the challenge for a post-libertarian political philosophy can be put this way: How to integrate the insights of economic rationality and the importance of individual liberty into a broader conception of the human good.
I went in and fixed the paragraph spacing in your post; WordPress was doubling the spaces between paragraphs.
Thanks, Irfan. I hadn’t noticed the problem. I’ll be on the lookout for it in the future.
David, you might like to look at my late-1980s piece “Human Rights as Game Strategies.”
I’d say that the theory of strategic games is a more general framework of interaction than is analysis restricted to pure economic rationality, as you have reasonably corralled the latter.
Changing the payoff rankings of either player in a Prisoner’s Dilemma situation changes the game to some other 2×2 game, which will have its own strategic rationality. Single-shot instances of PD or Chicken or other 2×2 games (2 players, 2 alternatives) are part of human interactions, but so are the iterated versions of these games, and the iterated PD is not so gloomy for cooperation as the single-shot occasion (even with no change in the two players’ payoff rankings over the iterated episode, which keeps them in the PD game throughout).
I’d say the theory of strategic games is general enough to fold in any changes in payoff rankings of the players due to increasing valuation of cooperation or due to Nozick’s ‘symbolic utility’ in THE NATURE OF RATIONALITY. It’s a long time since I studied that book, but I’m sure much of it is pertinent to your reflection here.
More on the iterated Prisoner’s Dilemma here:
The Evolution of Cooperation
Development to 2010
Also, “Evolutionary Instability of Zero-Determinant Strategies Demonstrates that Winning Is Not Everything” (2013)
Hi Stephen. Thanks for your comment, and thanks for the references. (Several of them I’ve read, especially the Nozick, Axelrod, and Skyrms.) I enjoyed your review of Sugden’s and Hardin’s books. But I must say, I have never been able to see that expanding the prisoner’s dilemma situation to the iterated, or evolutionary, case helps with the fundamental problem. Rather, it seems to me that what it does is change the game. It is essential to the prisoner’s dilemma that the values in the decision table define the game. When players can punish or reward each other in subsequent iterations, that’s no longer the case.
To point out that a strategy like “tit-for-tat” enables a pair of prisoner’s dilemma players, say, to outperform other pairs in a series of repeated plays is like pointing out that department stores with a “the customer is always right” policy can outcompete other stores through gaining a superior reputation and repeat business. True enough, but this isn’t the sort of case we are worried about. How important this point is may depend on how common true prisoner’s dilemma-type situations are in economic and social life. My own view is that they are very common. I think we don’t notice them much because people so seldom exploit them. And people seldom exploit them because we have evolved a proclivity, reinforced by socialization, to be cooperative. What I would like to do is work out the basics of an ethic that incorporates this insight about human nature.
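To make the tit-for-tat point concrete, here is a minimal sketch of the iterated game. The strategies and payoff numbers are illustrative choices of mine (standard T > R > P > S ordering), not taken from any particular study; note that the per-round payoffs never change between rounds, only the history grows:

```python
# Minimal iterated Prisoner's Dilemma. PAYOFF maps a pair of moves to
# the pair of payoffs (player 1's, player 2's); numbers are illustrative.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=100):
    h1, h2 = [], []          # each player's own past moves
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each strategy sees the other's history
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

# Two tit-for-tat players sustain cooperation for the whole run (3 per
# round each), while two unconditional defectors grind out 1 per round.
assert play(tit_for_tat, tit_for_tat) == (300, 300)
assert play(always_defect, always_defect) == (100, 100)
# Head to head, tit-for-tat loses a little (it is fooled exactly once),
# but cooperating pairs vastly outscore defecting pairs.
assert play(tit_for_tat, always_defect) == (99, 104)
```

This is exactly the department-store point: the cooperating pair prospers through repeat business, but nothing here touches the one-shot case, where no history exists to reward or punish.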
By the way, David Sloan Wilson—the biologist guru of group selection—argues that the iterated prisoner’s dilemma is just another example of group selection! For, the player pairs are little groups competing against each other over time. Pairs that cooperate outcompete pairs that don’t and so expand in the population. This might help explain the evolution of the human disposition to cooperate.
David, I think it is confusing to describe the iterated PD as a different game than the single-shot case of PD. The individual episodes of PD in the iterated setting do not involve any changes in the payoff rankings of the two players; they remain the pair of outcome-rankings defining PD. The situation in which they are playing differs from the single-shot case in that each player knows the history of the choices the two have made with each other so far in the string (and still no communication is allowed, only choices). But the rankings of outcomes for each of the two players remain constant throughout the iterations. No change of rankings is required for the emergence of cooperation (when there is such an emergence) in the iterated case. I concur about the presence of the single-shot PD (and at other times the iterated PD) in real-life interactions. (I’m a fan of Skyrms also, though his books on my shelves have been little opened so far.)
Yes, perhaps what I should have said is that the iterated prisoner’s dilemma and the one-shot prisoner’s dilemma are appropriate for modeling quite different types of life situations. In saying that the iterated and one-shot games are different, I only meant that whereas in the one-shot game the values in the table are all there is to the game, in the iterated game there’s more to it (because the players expect to encounter each other again). I don’t think there’s anything we disagree about here.
That’s a masterful post, a sort of paradigm of The Philosophical Blog Post: it’s the right length, pitched at the right level of technicality, has a beginning, a middle, and an end, makes a philosophically substantive point, and ends by applying that point to a different but related topic. I also agree with the substantive point being made. (Also: the post doesn’t defend Christopher Columbus, ha ha.)
Since I agree with 95% of the post, I’m just nibbling at the edges here. The one thing I’m not sure I agree with is the application to libertarianism in the very last paragraph. You say:
There’s an ambiguity in “Libertarianism can be described as the political philosophy that assumes that economic rationality is all there is to rationality.” Seems to me you could mean one or the other of the following:
I’m not sure which one you mean. I’m also not sure whether either one is right. Something weaker than (2) is certainly right: lots of libertarians hold the view you’re criticizing. But I don’t see why the view you’re criticizing is intrinsic to libertarianism, and am not sure it’s as ubiquitous a view among libertarians as you say.
Stephen mentions Nozick. I haven’t read Nozick on symbolic utility for a while either, but I just happened to re-read (teach) the first three chapters of Anarchy, State, and Utopia. The evidence of Nozick’s commitment to the “economic rationality” thesis there is frustratingly mixed. Evidence against: he doesn’t explicitly make the assumption that economic rationality = rationality as such. Evidence in favor: throughout the book, he seems to privilege economic explanations of rational behavior over others. For instance, for Nozick, getting out of the State of Nature is almost entirely a matter of making rational economic calculations to get out (chapter 2). We know people would leave a Lockean State of Nature because it’s economically rational to leave. Evidence against: then he gets to talking about deontological “side constraints”: “Isn’t it irrational to accept a side constraint….?” (p. 30) The answer is supposed to be “no.” So it’s somehow rational to accept a deontological constraint, but moral side constraints reflect “the fact of our separate existences” (p. 33), not economic rationality.
My point is, Nozick is the paradigmatic case of a libertarian, and if the evidence is mixed in his case, even claim (2) above seems too strong. So couldn’t what you describe as a “challenge for post-libertarianism” be a challenge perfectly consistent with libertarianism?
Thanks for a stimulating comment, Irfan. I don’t think I adequately considered just what I meant in saying that libertarianism assumes that economic rationality is rationality simpliciter. Looking at your two options, I would choose the first as best representing what I had in mind. That is, I think I meant to suggest that there is an intrinsic relation between libertarianism and the idea that economic rationality is rationality simpliciter. I was taking it that this is more or less definitive of libertarianism.
The main force of the claim that economic rationality is rationality simpliciter—as I was thinking about it—is that no further morality is needed beyond economic rationality. That is, the individual pursuit of economic rationality is fully sufficient to produce a healthy, happy, prosperous, flourishing social order. Nothing further is needed to produce happy people in a good and just society, and probably nothing further is possible.
Isn’t this arguably the libertarian conception? Milton Friedman famously (notoriously?) says that “there is one and only one social responsibility of business—to … increase its profits…” Your pal Jason Brennan is proud of his new book (with Peter Jaworski), Markets without Limits: Moral Virtues and Commercial Interests, which goes through a series of controversial activities (such as selling one’s organs, allowing rich people—only those who can afford it—to use DNA technology to produce smarter, healthier, more beautiful children than other people’s children, etc.) and argues that not only should they be legal, they are morally unobjectionable.
These are characteristic libertarian positions because the core of libertarianism is the idea that individuals pursuing their rational self-interest (i.e., economic rationality) in a free market will always produce the optimal possible outcome, in any area of concern, in terms of people’s desire satisfaction. So libertarians spend most of their analytical efforts trying to show (1) that markets never fail, even in cases where most people think they do (roads, education, arts and science funding, environmental pollution, etc.) and (2) that when markets do apparently fail, the failure is actually due to some nonmarket interference, especially by government. And their leading—really, their sole—policy recommendation is to get government out of the way so that society and people’s happiness will be optimized.
In this spirit David Gauthier in Morals by Agreement argued that a pure free market, where it exists, is a “morally free zone,” meaning that within a pure free market, individuals rightly pursue their selfish interests without concern for the interests of others and without moral constraints of any kind. And it’s not a trick; he really means it. In a pure free market, “morality as a constraint on the individual pursuit of utility” (93) simply would not exist. Thus, Gauthier would be a libertarian if he thought a pure (or mostly pure) free market could be brought about. He is not a libertarian only because he happens to believe, as a matter of empirical fact, that externalities (and thus market failures) are ubiquitous.
Ayn Rand talks similarly. I mean, obviously she doesn’t say there’s no morality in a free market. Quite the opposite, her whole program is to show that the selfish pursuit of personal interest in a free market is profoundly moral. But this is a contest over the mantle of morality, not over how people should behave. It seems to me that her morality boils down to one of enlightened self-interest, and furthermore one in which virtue is construed purely instrumentally (not constitutively) and the good is construed ultimately in biological terms of life promotion (not subjective utility maximization). This looks to me like the view that economic rationality is rationality enough. I would take her doctrine that “there are no conflicts of interest among rational men” as further support for this interpretation.
But now there seems to be a glaring problem for this view of libertarianism raised by your comment, which is that Nozick’s “side constraints” are precisely constraints on the pursuit of self-interest. More generally, the observance of individual rights—for example, not stealing even when you can be very sure of getting away with it—is just exactly the sort of thing I characterized as violating economic rationality. Now, libertarians certainly hold that one is morally obligated to observe individual rights. So if my own argument is correct, then libertarianism cannot be the view that economic rationality is rationality simpliciter. Therefore, my claim about libertarianism can’t be right. I need to modify it somehow.
(Note on the claim that libertarians hold that observance of individual rights is a matter of moral obligation: There is some confusion about this, actually. I am thinking particularly of libertarian anarchists such as Roy Childs, Murray Rothbard, and David Friedman. Their arguments for anarcho-capitalism are to the effect that economic rationality is sufficient to ensure the observance of individual rights. That is precisely the force of the claim that market forces alone (i.e., without government) can protect individual rights. In support of this claim, incidentally, Roy Childs made very effective use of Rand’s claim that there are no conflicts of interest among rational men. This principle does seem to imply that observance of individual rights is merely a matter of economic rationality. If so, then libertarianism says economic rationality is rationality simpliciter after all. And of course the libertarian anarchists regard themselves as the only fully consistent libertarians.)
There is also a second problem you might have had in mind, which is that a libertarian political philosophy hardly precludes a person from holding stronger views about the human good than the merely external values (health, wealth, etc.) of economic rationality. For example, a person can be a libertarian and still hold a Neo-Aristotelian conception of full rationality of the sort I outlined. I agree that there’s nothing positively inconsistent in this. Therefore, perhaps it’s too strong to say that libertarianism intrinsically holds that economic rationality is rationality simpliciter. Nevertheless, what I’m angling for is a political philosophy that does not merely allow but requires a richer conception of the human good than that of economic rationality, one that identifies human well-being with virtues such as cooperativeness and loyalty, for example, as well as external goods such as health and wealth. What form such a political philosophy should take is what I’d like to figure out. Perhaps it would after all be just libertarianism with a richer conception of the human good added in. This seems to be what Deirdre McCloskey advocates in The Bourgeois Virtues (U. of Chicago Press, 2006), for example. But maybe not.
I’m sympathetic to much of what you’ve said, including your last response to Stephen (about iterated PDs, etc.). Now that I understand you better, I think you’re angling for a qualified version of my thesis (1): i.e., there’s a distinctive species of libertarianism such that (1) is true of it, and this species is arguably one of the paradigmatic forms of libertarianism, but not the only one. The defining normative commitment of libertarianism is some version of the non-initiation of force principle (whatever name one gives to it, whatever interpretation one gives it, and however one justifies it). Commitment to that principle doesn’t necessarily presuppose an economic conception of rationality, but has often gotten one.
I think the economic conception of rationality is certainly central to Gauthier, to the Friedmans, as well as to my pal Jason Brennan (and his pals). It’s probably true of Rothbard and Childs, though I don’t know their work well enough to say that with confidence. I don’t think it’s clearly true of Ayn Rand or Nozick, however. I’ve already mentioned Nozick, but I think Rand’s view on this is just as equivocal as Nozick’s, and doesn’t follow either from her conception of life as the standard of value, or the non-conflicts of man’s interests.
I don’t have the text with me right now, but in “The Objectivist Ethics,” when Rand is describing the cardinal values and virtues, she describes them (I’m paraphrasing) as the cardinal values and virtues which together are “the means to and realization of” the agent’s life. (I’m reasonably sure of the accuracy of the quoted phrase, but will check it tonight when I get home.) That argues in favor of a “constitutive” reading. What’s unclear is whether or not she means “realization of” in a reductive way (as she could). In other words, she might be saying: X, Y, and Z are the means to and realization of the ultimate value, life; that entails that X, Y, and Z constitute a well-lived life, but they “constitute” life in virtue of the instrumental contribution they make to the realization of life. In other words, X, Y, and Z are not intrinsically valuable constituents; they’re fixed, but instrumentally indispensable constituents of a flourishing life qua human. That may be consistent with what you’re saying about her, but she herself doesn’t address the issue, so it’s not clear what she wants to say about it.
There’s a discussion in Introduction to Objectivist Epistemology that’s relevant here: the discussion of “teleological measurement.” There she says that practical rationality is a matter of ordinal rather than cardinal measurement (which seems to cut against your interpretation), but the example she uses to illustrate the point is straightforwardly economic (which supports your interpretation). I think the example is optimizing a budget, but I don’t remember offhand, so I’ll have to look it up. But she ends that chapter with a polemical discussion of appropriate and inappropriate standards of measurement. The example she gives is the measurement of love, and the point she makes is that though measurement is a numerical relationship, the appropriate standards and methods of measurement are “not always…easily apparent…nor is the degree of achievable precision as great as…” in the hard sciences (p. 39). That seems to clear the way for what you’re doing and remove a motivation for reductionism. As in a lot of cases, I think Rand’s view is less determinate than either her champions or critics take it to be.
You’re right to say that I had your “second problem” in mind. I was thinking, in particular, of people like Roderick Long, Neera Badhwar, Mark LeBar, and Doug Rasmussen and Doug Den Uyl, all of whom are in some sense Aristotelians but also in some sense libertarians. Roderick’s view is probably the clearest, just because it’s so explicitly Aristotelian and explicitly libertarian. I found this an instructively clarifying discussion.
I don’t consider myself a libertarian or (any longer) an Objectivist, so I think it’s legitimate to leave the issue as indeterminate at the end as you do: “Nevertheless, what I’m angling for is a political philosophy that does not merely allow but requires a richer conception of the human good than that of economic rationality….” It may be that some form of libertarianism, or something like libertarianism, flows from that richer conception, but it isn’t obvious, and I don’t think anyone has adequately made the case one way or another. (I haven’t read McCloskey’s book.) We don’t really know. The work is still waiting to get done, but there’s no reason to put rigidly ideological constraints on the form it must take–something like a non-initiation of force principle is perhaps true, but not necessarily the dogmatic version of it that usually gets served up in libertarian rhetoric. The main point is that the richer conception is the right starting point.
On Ayn Rand, the main thing I recall is a strong statement to the effect that values are what we seek to achieve, virtues are our means of achieving values. (I think this is in “The Objectivist Ethics.”) That sounds to me like an endorsement of an instrumental conception of virtue. Virtue is not its own reward; the reward of virtue is the values it brings you. But if you do check the text, I would be curious to know what you come up with.
No argument there!
I’ll have a look at Long’s blog post.
Sorry, I reproduced the wrong link in my previous comment. This is the right Long link to use.
So I looked up the passages. Here’s the relevant one from “The Objectivist Ethics.” One and the same passage includes both what you remember and what I was referring to.
I don’t think that yields an instrumentalist reading. She describes the cardinal values as “the…realization of one’s ultimate value” (my emphasis) which I take to imply that the values constitute the ultimate value, and is compatible with saying that the virtues constitute their corresponding values. In constituting their corresponding values, virtues also promote values; likewise the cardinal values’ relation to the ultimate value. Big mouthful: Both cardinal values and virtues simultaneously constitute and are instrumental to a value that is partly constituted by and partly independent of the items that promote it. (I once heard Terence Irwin give an identical interpretation of Aristotle on moral virtue.)
So when she says that virtue is the act by which one gains value, saying that is compatible with saying that virtuous acts partly constitute human flourishing. Whatever that all amounts to, it’s not a straightforwardly instrumentalist reading, and certainly not a maximizing reading. I don’t think there’s any clear-cut way to get from what she’s saying to an economic conception of rationality.
The full IOE passage is too long to quote here, but it’s on pp. 33-34. Here’s part of it:
The example used to illustrate teleological measurement might lead you to think that teleological measurement is indistinguishable from an economic conception of rationality, but the very next paragraph, on love (bottom of p. 33 to top of p. 34), seems to be denying that. Ultimately, however, Rand punts on explaining what is going on in the non-economic cases. So what we end up with is a claim that reads something like this:
Again, no matter what you do with all that, I don’t think it yields an economic conception of rationality in the way that, say, a reading of the Friedmans does.
I agree that you can read her this way. I’m a little doubtful whether that’s the most natural reading, though. When she talks of realizing one’s ultimate value, one’s life, she speaks of the three values which do this. She does add, after she has enumerated them, “with their three corresponding virtues,” but she has just distinguished virtues as the means by which one gains and/or keeps values.
Here is an additional, juicy quote to contemplate, from Galt’s speech:
This is also subject to interpretation, obviously. But it doesn’t sound very eudaimonist on its face.
It has always seemed to me that one of the best indications that she has a rich conception of the human good (not just external goods) is that she takes the standard of value to be “the life of man qua man.” The qualification “qua man” seems significant (though she never really spells out what it means). It seems to say that a mere animal or material existence won’t do.
Concerning Roderick Long, thanks for stimulating me to finally read some of his writing. I may actually have the Objectivist Studies volume you cite in a storage box. But whether or no, I’ve certainly never read it. I spent a good part of yesterday reading him, especially the series of posts on eudaimonism at BHL. He is much more libertarian than I am (a Rothbardian! I had no idea there were any left), but his general project of basing libertarian political philosophy on Aristotelian ethical foundations is obviously similar to what I’m trying to do. Also his clear and straightforward conception of Aristotelian eudaimonism is refreshing and illuminating to read.
One striking doctrine of his is that individual rights (and the virtue of adhering to individual rights) are an immediate constitutive element of human flourishing. So unlike me (and apparently also Douglas Rasmussen and Douglas Den Uyl), who think rights are a deontological principle distinct from human flourishing (as I argued in a recent post), he thinks they’re a part of the package. In this connection, he writes:
I also found his discussion of “the unity of virtue” informative and persuasive. You alluded to this idea in an earlier thread. What Long has to say about it has warmed me up to the idea.
On virtues as means, I wonder if the following passage helps at all. It’s Terence Irwin’s note on prohairesis in the Glossary of his translation of the Nicomachean Ethics (p. 322, sv “decision, prohairesis,” under items 2-4):
I think it’s compatible with the passage I quoted to say that virtue is a “means” in that complicated sense.
I don’t think it’s objectionable to deny that virtue is its own reward. In one sense it is, in another sense it isn’t. Obviously, if virtue is constitutive of flourishing, then the practice of virtue is its own reward in a straightforward way that I won’t belabor.
But in another sense, under favorable circumstances, virtue gives rise to natural pleasures that involve rewards that go beyond virtue. Put it this way: Virtue is possible for moral agents both in San Francisco and Gaza City, but it’s more pleasurably experienced in San Francisco than in Gaza (or so I hear). Given his circumstances, the virtuous Gazan can at best experience a sort of moral contentment that falls short of intense happiness. Gaza is just too miserable a place for real joy. By contrast, the virtuous Californian experiences something a lot more pleasurable. The pleasure of virtue-in-SF is the reward of virtue that in some sense goes beyond virtue. It’s the part of virtue that is not sufficiently strongly connected to virtue to be instantiated wherever virtue is to be found, but is correlated-with-virtue-in-favorable-circumstances.
Contra Kant’s worry about eudaimonism, virtue isn’t contingent on or hostage to getting this reward, as though you could be excused from the requirements of virtue if the potential of getting the California-like psychological reward was iffy. But if you’re lucky enough to be in sufficiently favorable conditions, virtue does give rise to this more psychological sort of happiness or well-being. So to that extent, I think her formulation is right, though it’s not clear that she meant by it exactly (or even approximately) what I just said.
Incidentally, Rand explicitly says that “survival qua man” does not mean a momentary or merely physical survival (Virtue of Selfishness, p. 26).
Digression: I perennially wonder whether people like me have come to over-theorize Rand (finding stuff in her that’s not really there)–or come to discover brilliantly compressed insights in her writings that were sitting there the whole time, waiting for discovery by philosophically informed interpreters. It’s not entirely clear to me which of the two I’m doing above.
If you manage to read Roderick’s Reason and Value, you’re in for a treat. The whole thing is online as a PDF, by the way. (For a while it was going on Amazon for over $100.) Maybe we could read through it here at PoT and do a series of commentaries on it. I could ask Roderick if he’s interested in joining in.
The Irwin thing is very interesting, to me anyway, because it brings up a point I never thought of. Aristotle says (1) that we do not deliberate about the end, only the means, (2) that choice is the outcome of deliberation, (3) that virtuous actions are chosen, and (4) that virtuous actions are chosen for their own sake, not (necessarily) for the sake of any further end. These propositions are seemingly incompatible. Irwin has a neat solution, namely that means need not be only instrumental. If an end should be an essential component of some larger end, the first end could be viewed as a means to the larger end. This doesn’t seem unreasonable in itself, and it strikes me as plausible also that it could be Aristotle’s doctrine. Whether it’s what Rand had in mind seems to me more doubtful.
On the matter of Rand, it occurred to me that another piece to look at might be her essay titled “Causality versus Duty,” which as I recall takes a rather instrumentalist view of our proper motivation for moral action.
I would be up for this. I did get the PDF, which I found because Long himself in an Amazon reader “review” provides the URL for it. I still haven’t looked at it, but if you think a discussion of it would be worthwhile, let’s do it.
I think a discussion of Roderick’s monograph would definitely be worthwhile, but I can’t start it until well into December–in other words, after final grades are in. If only Aristotle had said, “We do not deliberate about grades; we just randomly assign them and get on with it.”
On Aristotle: he doesn’t quite say that we deliberate “only about means.” He says we deliberate about what’s pros ta tele, roughly translated, “what conduces to ends.” The vagueness of the formulation is what allows for Irwin’s approach. But the approach is itself highly contested, at least as an interpretation of Aristotle. Here’s a Google books link to Jonathan Lear’s Desire to Understand, pp. 146ff, which takes a totally different approach. But I think Irwin’s claim is plausible, just as a matter of fact rather than interpretation. It’s hard to know whether Rand had it in mind or not. She read the Nicomachean Ethics, but I don’t know what she got out of it. Of course, reading the NE is not a necessary condition of coming up with the view.
Will take a look at “Causality Versus Duty,” which, alas, is not in my office, if only to hide my Ayn Rand book collection from the prying eyes of my colleagues (he said, as he published the comment on the Internet).
David, my memory of economics texts is that they hold that rationality can only be instrumental. (Would that also be the view of Herbert Simon?) There cannot be rationality about ultimate values. Certainly Rand and quite a few libertarian philosophers, if not libertarian economists, would take issue with that view.
One detail, also, about David Friedman: As I recall, he is a utilitarian in his thought on libertarianism; he rejected talk of rights at least as really basic. I imagine he would welcome those late-80’s books by Hardin (liberal) and by Sugden (conservative) in which rights are conventions that can emerge through purely instrumental locally self-interested behavior. Hardin is/was a utilitarian. But I’m unsure how Friedman would respond to that (or any similar subsequent work), as I have not followed any of them since the ’80’s.
Yes, for theoretical purposes, economists all (or anyway, both Austrian and neoclassical economists) use a utility maximizing conception of rationality, which does not ask about the source or the nature of the utilities. And I would say there is a third conception of rationality, besides what I’m calling economic rationality and full rationality, which I would call instrumental rationality. Instrumental rationality does not evaluate or choose ends or make any particular assumptions about ends at all. It is concerned exclusively with means. This is an extremely old and extremely useful conception of rationality. It is the conception used in mathematical decision theory and in game theory, for example. It is also what Aristotle is implying, I think, when he says, as he does repeatedly, that we do not deliberate about ends but only about means.
I didn’t mean to deny any of this in speaking of “economic rationality” as being concerned with external or material ends. But besides the “pure theory of reasoning about means,” there are also conceptions of rationality that employ or include certain conceptions of what is good for us. For example, when we say punishment is bad for the criminal, this implies something about what is good or bad. True instrumental rationality cannot say that punishment is bad or that profit is good. But the vast majority of economic analyses of practical situations and problems do say these things. Or at least, they say that a conception of self-interest based on these sorts of material values will be predictively successful. An economic analysis can’t be predictively successful based on pure instrumental rationality alone. Economists must make assumptions about the values that are in play, and they nearly always assume that the values in question are material. For instance, money. (Not always, of course, but usually.) Since these are the sorts of analyses that economists specialize in, it seemed reasonable to call this conception of rationality “economic.” Whereas instrumental rationality is a general, mathematical theory, not peculiar to economics.
David Friedman claims he developed his theory of anarcho-capitalism because fully consistent libertarianism is incompatible with a government monopoly on coercion, so it is necessary to show that we can do without such a government monopoly. I would say that anyone who is obsessed with “fully consistent libertarianism” is not a utilitarian! Not completely, anyway. However, I do not claim to be an expert on David Friedman. I recall reading some stuff he wrote on criminal justice where he talked about adjusting punishments — and even deciding what legal boundaries should be entailed by certain property rights — by calculating economic costs and benefits. This is a view I have since learned to associate with Judge Richard Posner. It is certainly a utilitarian approach to the law.
Apologies, David, for taking so long to get to this very interesting post. What follows is a series of connected points that constitute as much a competing picture (with a rationale for it) as a direct engagement with your view (or the standard literature, I’m afraid). I hope there are enough points of contact for this to be of some use.
(1) I understand PD scenarios to be simple, easy-to-study scenarios in which guiding or regulating one’s action so as to achieve important outcomes valued by an agent depends most directly, not on the agent’s agency, but on the coordination of the actions of more than one agent. Though a mythical sort of Hobbesian sovereign might achieve something like “collective agency” (willing that the relevant coordination of action across agents occur, and this tending to occur), genuine agency – and rational guidance – is individual. In order to achieve these outcomes, we need to value being a cooperator. If we do, then, in using instrumental rationality to pursue our values, we provide some opportunity for others to do their part and for all of us to enjoy outcomes most effectively achieved via appropriate coordination of action across agents. On this picture, the difference between the non-cooperator (who cannot effectively achieve the outcomes that are sensitive to action-coordination across agents) and the cooperator is a difference in values or preference. There is no need for a distinct sort of rationality (rational process, type of rational step) that is possessed in full only by the cooperator. What is the motivation or reason for resisting this picture?
(2) At least if there are enough cooperators (or potential cooperators) around, it is beneficial or valuable for one to be a cooperator rather than not. If, in such a situation, one is bonked on the head and transformed from a non-cooperator to a cooperator, one does well for oneself – a thing that is good for one has befallen one. Similarly, one would be “going from worse to better” in transitioning from a non-cooperator to a cooperator. But instrumental rational evaluation is as natural as instrumental outcome-wise or value-wise evaluation. It is intuitive to say that, if one starts out as a non-cooperator (if enough cooperators or potential cooperators are around), then one is rational in coming to be a cooperator via the sort of not-explicitly-rational responding and adjusting to things that we tend to engage in (Nomy Arpaly would call this being responsive or sensitive to reasons – not sure that this is the most perspicuous way of describing what is going on, but this is a current way of speaking of the relevant phenomenon). It would be nice to have a good explanation of just why rational, and not simply outcomes-wise or value-wise, instrumental evaluation is appropriate here – presumably, it has something to do with agency adjusting to outcomes to be produced (so it is not just a matter of good things befalling) but not via explicit steps of reasoning. (I worry, David, that in bringing in objective interests or values, or correspondence to these, as an element of rationality, you have equated values that happen to be good with the broadly rational procedure that reliably tends to produce the good values and subsequent utility calculation and action.)
(3) It is entirely possible that non-cooperators are not fully rational (in not having cooperator-type values in their decision table) due to presently having, or perhaps due to having had in the past, sufficient reason to adopt such values. Say such a non-cooperator defects in a PD scenario when there are plenty of cooperators around. This non-cooperator acts against her own interests. If she had access to this information and the capacity to bring this information to bear on whether to become a cooperator or not (change her values), quite plausibly she is irrational in *not being a cooperator* (not having the relevant value or values). But she is also quite plausibly irrational in defecting on this occasion. If we say this, what we are doing, I think, is evaluating the rationality of her action in light of the rationality of the values that she does her “rational calculation” on to determine what behavior to exhibit. This is another way of explaining the sense in which the defecting PD agent might be both rational and irrational. She might be (instrumentally) rational in her action relative to her values but (instrumentally) irrational in her action relative to the (instrumental) irrationality of her values.
I haven’t thought about these issues in a long, long time and I really appreciate this opportunity to hash through some of them again.
Hi Michael. Sorry in turn for taking so long to reply. I confess I played hooky the entire four-day weekend!
If I understand what you’re saying, your concerns seem to me to boil down to two major points.
First, you offer a conception of rationality designed to achieve optimal outcomes for the two players in prisoner’s dilemma situations. By “optimal” I mean the highest outcomes the players can mutually achieve, namely the outcomes achieved under cooperation. You think that a disposition to cooperate can be incorporated into an agent’s value structure, after which cooperation is a straightforward consequence of instrumental rationality. I agree about this, of course. It is the value structure I described as that of the “devoted cooperator” in paragraph six of my post. However, you also seem to think—in your point (2)—that adoption of devoted cooperator values is rational from the point of view of the defector, or what I called the conventional economic agent, at least as long as there are enough other devoted cooperators around to make cooperation a high payoff strategy.
But what about the points I raised about this in paragraph five of my post? The thing is, once one has adopted the values of the devoted cooperator, one no longer cares enough about the payoffs to want to be a cooperator anymore! Of course, at that point one is a cooperator, so doesn’t have to want to be one. But the point is, the values that make the conventional agent wish he were a cooperator no longer apply once he is a cooperator. The cooperator does not care about getting optimal payoffs. If he did, he couldn’t rationally be a cooperator. To put it another way, when the conventional agent wishes he could make himself a cooperator, what he is wishing is that he could make himself, from his present point of view, irrational. That is still a bit of a paradox: The only way to get the higher payoffs in prisoner’s dilemma situations is to not care (much) about them.
Also—this is sort of a technicality—note that it isn’t enough to make cooperation “rational” just that there be a sufficient number of other cooperators in the locale. The truth is, the more cooperators there are, from the conventional agent’s viewpoint, the more reason there is to defect! For, defecting on cooperators provides the best payoffs of all. Assuming that encounters are anonymous (i.e., no tracking agents between iterations), defection is always a dominant strategy, and there’s no getting around it. A society of agents who are all cooperators except for one lone defector will be richly exploited by that defector, and if “reproduction” of agents over time is in proportion to payoffs, defectors will soon overwhelm the population. What is needed is not just other cooperators, but cooperators who can be identified as such. In a society of agents who are all defectors except for a lone pair of cooperators who can reliably spot each other (and thereby restrict their dealings only to other cooperators), cooperators will quickly overwhelm the population. So what matters is not numbers of cooperators but their identifiability. I think your ideas are easily amended to incorporate this detail.
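That last point, that identifiability rather than sheer numbers is what matters, can be illustrated with a toy replicator simulation. This sketch is my own illustration, not anything from the post: the payoff numbers and function names are arbitrary, constrained only by the standard prisoner’s dilemma ordering T > R > P > S.

```python
# Toy replicator dynamics for the point above: with anonymous play,
# defection dominates; once cooperators can identify each other and
# restrict cooperation accordingly, cooperation takes over.

T, R, P, S = 5, 3, 1, 0  # temptation, mutual reward, mutual punishment, sucker

def avg_payoffs(frac_coop, identifiable):
    """Expected per-encounter payoff for cooperators and defectors,
    given the fraction of cooperators in the population."""
    fc, fd = frac_coop, 1.0 - frac_coop
    if identifiable:
        # Cooperators spot each other: they cooperate among themselves
        # and defect against known defectors.
        coop = fc * R + fd * P
        dfct = fc * P + fd * P
    else:
        # Anonymous encounters: cooperators always cooperate, so
        # defectors collect the temptation payoff against them.
        coop = fc * R + fd * S
        dfct = fc * T + fd * P
    return coop, dfct

def evolve(frac_coop, identifiable, generations=200):
    """'Reproduce' each type in proportion to its payoff."""
    for _ in range(generations):
        c, d = avg_payoffs(frac_coop, identifiable)
        total = frac_coop * c + (1.0 - frac_coop) * d
        if total > 0:
            frac_coop = frac_coop * c / total
    return frac_coop

# One lone defector among anonymous cooperators: defectors take over.
print(evolve(0.99, identifiable=False))
# One lone pair of cooperators who can spot each other: cooperators take over.
print(evolve(0.01, identifiable=True))
```

Running it shows both halves of the claim: a 99% cooperator population collapses under anonymous play, while a 1% seed of mutually identifiable cooperators grows to dominate.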
Second, you are concerned about incorporating values into rationality. I have the impression you want to keep rationality instrumental. I acknowledged the importance of instrumental rationality in a reply to a comment by guyau. I have no problem with instrumental rationality. Merely as the science or mathematics of calculating means to ends, it is important. But I think rationality simpliciter means more than just effective means/ends calculation. If we imagine a creature that must have a certain sort of food to survive but has preferences that lead it away from that food and toward food that won’t nourish it, I think we should say that that is an irrational situation. Certainly we should say that if we think the creature is capable of recognizing its own interests. Note that there doesn’t have to be anything incoherent or contradictory about such a creature’s preference structure. That is, it needn’t be subject to Dutch Books or other formal problems. But it is still irrational. It seems to me that any general definition of rationality ought to imply, one way or another, that someone who works against his own manifest interests is being irrational.
I realize that I’ve just used an intuition pump argument—and I’m not a fan of intuition pump arguments. But I’m not sure what else to say. People often use “rational” and “irrational” in practice as very broad terms of praise and criticism, designating simply what makes sense or does not make sense. This includes recognizing or failing to recognize one’s manifest real interests. Moreover, we need such a concept, I’d say: the concept of the reasoned pursuit of one’s good. For, this is central to a well-lived life. Since we need such a concept, and appear to have it in the way “rational” is often used in practice, it seems fair to use “rational” in the way I do in defining a concept of “full rationality.”