In “Social Morality and the Primacy of Individual Perspectives” (2017), Gerald Gaus responds to critics of his The Order of Public Reason (2011) as part of a symposium on that book. I presume The Tyranny of the Ideal (2016) is a continuation of the ideas earlier and more formally developed in the 2011 book. The 2017 essay is valuable because it aims to “sketch a modest recasting of the analysis” presented in the 2011 book. That is, more or less the whole argument of the 2011 book is restated in new terms, and much abbreviated. The following is a brief summary of the argument and one of its implications.
Gaus distinguishes “the philosophical perspective” from “individual normative perspectives,” the former being detached, objective, and theoretical, while the latter represent the moral convictions of individual agents living in their particular situations. It is normal in moral and political philosophy to assume that the philosophical perspective is the ruling perspective in terms of which to assess the perspectives of individuals and also the system of moral rules adopted in any community. That is, the philosophical perspective constitutes an Archimedean point from which objective moral standards and rules can be identified, in light of which individual perspectives are to be judged for adequacy, appropriateness, etc. For example, in the philosophy of John Rawls, there is a theoretical perspective from which the justice of particular societies is to be judged and which dictates the standards that individuals ought to adopt in making their own moral judgments. Again, a libertarian scheme like Nozick’s presents a theoretical model of a just social order that is to guide the judgments of individuals.
But Gaus thinks this is all wrong. Exactly why is not clear to me. Perhaps he thinks the prospect of justifying any moral theory objectively is hopeless. But that’s not what he says (that I’ve seen or remember). Rather, he mainly appeals to the need for an “open society” or “free society”—terms he seems to use interchangeably—to “take moral diversity seriously” (19 [all page references are to Gaus 2017]). “The goal of the book is to show how a diversity of moral views can lead to a cooperative social morality while abjuring as far as possible ‘external’ moral claims—claims that do not derive from the perspectives of cooperating individuals. The diverse individual moral perspectives, and what they understand as normative, must be the real engines of social normativity” (1). The idea might be that to require the individuals in a society to endorse any given moral code would be coercive and thus incompatible with an open society; thus, the philosophical perspective as arbiter of social morality won’t do. Alternatively, the idea might be that in the end there really is no deep theoretical justification of moral rules. Instead, moral rules have normative force when they are accepted as such; i.e., when many or most members of a community follow them and believe the other members of the community expect everyone to follow them and accept the legitimacy of punishing noncompliance (on this see also “The Open Society as a Rule-Based Order,” 2016a). “The ‘normativity’ that exists in a system of social morality comes from the normative commitments of the participants” (2). Thus, people do not accept the moral rules because they are theoretically justified; rather, the rules are accepted first, and only then can we ask whether they are philosophically justified (3). Note that the philosophical perspective is not completely rejected in Gaus’s approach. But its role is reduced. 
Instead of the philosophical perspective being the origin of a moral code, the moral code evolves from the antecedent individual perspectives of the community members. Only then does the philosophical perspective come into play, where its role is to reflect on and assess the acceptability of the evolved moral code. Gaus says (3) this philosophical assessment is a point on which he departs somewhat from Hayek, who doubted the capacity of reason to grasp and assess an evolved moral code.
But how can “diverse individual moral perspectives” be reconciled to form a single, cooperative social order? The mechanism Gaus proposes is that people who value a cooperative social order will value the shared moral rules that it requires. And he uses a quasi-formal model in which the “moral utility” any individual assigns to a potential moral rule is the arithmetical product of that individual’s “inherent evaluative utility” for the rule and a weight assigned to the rule. The weighting function ranges from 0 to 1 and is greater the more people who adopt the rule. Thus, a rule that nobody else follows will have zero total moral utility no matter how great its inherent utility for the individual, and a rule that everybody else follows can have greater total moral utility for an individual than another rule the individual inherently values more but which few other people follow.
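Gaus’s weighting model can be made concrete with a small sketch. The particular weighting function below (the bare fraction of adopters) is my own illustrative assumption; Gaus requires only that the weight range from 0 to 1 and increase with the number of people who adopt the rule.

```python
# Sketch of Gaus's moral-utility model. The linear weighting
# function is an illustrative assumption, not Gaus's own.

def moral_utility(inherent_utility, share_of_adopters):
    """Moral utility = inherent evaluative utility x weight in [0, 1].

    Here the weight is simply the fraction of others who follow
    the rule (a hypothetical choice of weighting function).
    """
    return inherent_utility * share_of_adopters

# A rule nobody else follows has zero moral utility, however
# highly the individual inherently values it:
print(moral_utility(10.0, 0.0))  # -> 0.0

# A widely followed rule can outscore a rival the individual
# inherently values more but which few others follow:
print(moral_utility(4.0, 0.9))   # -> 3.6
print(moral_utility(10.0, 0.2))  # -> 2.0
```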
Given that people value not just moral rules but shared moral rules, we can see how people with diverse moral commitments can converge on a set of shared social rules. For example, in a scenario in which two rival rules are contending for adoption, the rule with initial majority support can come to be (nearly) universally adopted over the rule with initial minority support, even if that minority is passionately devoted to it, as the “shared support premium” for the majority rule snowballs over time. Whenever different rules compete for dominance in a community, the outcome will depend on the number of supporters of each rule, how strongly those supporters prefer their rule over other rules, and how much they value having shared social rules. Thus, it would be possible for a rule with initially only minority support eventually to win universal adoption, if its supporters tend to be passionate about the rule and to care less about having shared rules, while the supporters of the initial majority rule care less about that rule and place more emphasis on having shared rules. Also, the evolution of shared social rules will be path dependent: chance events that give one rule an initial lead over its rivals can generate a cascade of support that leads to universal acceptance of a rule that was by no means foreordained. There are also scenarios in which no rule achieves hegemony; for example, if enough people’s preference for one rule over another is too great to be overcome by the value of having shared rules, or if not enough people value rule sharing.
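The snowball scenario can be illustrated with a toy simulation. Here I assume (my assumption, not Gaus’s) that each agent repeatedly re-adopts whichever rule maximizes inherent value times the current share of adopters; a 60-person majority mildly preferring rule A then overwhelms a 40-person minority that inherently prefers rule B:

```python
# Toy adoption dynamics for two competing rules. All numbers are
# illustrative assumptions; Gaus's model is only quasi-formal.

N = 100
# 60 agents mildly prefer A; 40 agents prefer B somewhat more strongly.
values = [{"A": 1.2, "B": 1.0}] * 60 + [{"A": 1.0, "B": 1.4}] * 40
adopted = ["A"] * 60 + ["B"] * 40  # initial adoption tracks preference

for _ in range(10):  # iterate until the process settles
    shares = {r: adopted.count(r) / N for r in ("A", "B")}
    # Each agent re-adopts the rule with the higher weighted utility.
    adopted = [max(("A", "B"), key=lambda r: v[r] * shares[r]) for v in values]

print(adopted.count("A"), adopted.count("B"))  # -> 100 0
```

With a sufficiently passionate minority (raise the minority’s inherent value for B above 1.5 at these shares), the minority holds out and the population stays split, matching the no-hegemony scenarios described above.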
In sum, on Gaus’s model, the code of social rules for a community is the outcome of an evolutionary process grounded in the diverse moral attitudes, preferences, and commitments of the individual community members. The code of social rules arises through a spontaneous, self-organizing process as individual decision-makers gradually conform themselves—or not—to what they perceive to be the moral expectations of others.
There are several points that Gaus emphasizes about this process. First, it is a process of moral decision-making. It is not primarily a matter of arranging or constructing “institutions” that people see as promoting their interests by channeling the self-interests of agents in the community (à la Douglass North, for example). Some of this may go on, of course. But the evolution of social moral rules is driven by moral reasoning, on Gaus’s view. Yet, second, it is not driven by deep theory. The social moral code is not a product constructed from the philosophical perspective.
The critical idea [is] that the normative basis of our shared morality is to be grounded not in the “Archimedean perspective” [citation of Gauthier and a paper by Gaus & Thrasher] that reveals correct principles of morality independent of the perspectives of those in a practice of social morality. The Order of Public Reason seeks to avoid appeal to any such transcendental source of moral claims and demands. The “normativity” that exists in a system of social morality comes from the normative commitments of the participants. (2)
Finally, third and most strikingly, it does not depend on people necessarily changing their moral attitudes or preferences or commitments very much. On Gaus’s model, people may adopt—at least in the sense of conforming their behavior to—social rules often because they believe others have adopted them, not because they inherently value or agree with them. People who have even fairly strong ideas about property rights or social entitlements or fair judicial procedures may nonetheless conform to alternative social rules concerning these things if they believe that these are the rules of their community, due to the value of having shared rules. This is how an open society accommodates moral diversity.
An objection to Gaus’s view may be that it places too much weight on the value to people of shared moral rules. Why should people care so much about having shared rules that they would sacrifice their own in order to conform to the group? Gaus has, it seems to me, two lines of reply to this objection. The first, highlighted in this paper, is that shared rules are necessary for moral accountability. That is, to hold someone accountable for a rule violation, they must accept the legitimacy of the rule. Gaus cites research showing that people do not accept punishment for a rule violation if they do not accept the legitimacy of the rule (for example, Hopfensitz and Reuben 2009 and Fehr and Gächter 2000). If people do not accept a given rule (or regard it as legitimate, whether or not they agree with it), then they interpret “punishment” for its violation as aggression and respond accordingly. The result is that punishment has the opposite effect to what is intended. This is in addition to the merely logical point that you cannot expect to hold people “accountable” to rules they do not acknowledge. Therefore, a cooperative social order depends on shared rules. The second, only touched upon in this paper (see §5), is that, rational or not, people in fact care very much about others’ expectations for rule-following. Human beings take it as given that social situations are rule-governed. They by nature seek to discern the rules that govern their social environment, and they are good at discerning them. They are heavily influenced both by (i) what rules they perceive others to follow and (ii) what rules they believe others believe they should follow. Empirically, these perceptions typically influence people’s actions as much as, or more than, their own personality, values, and preferences do. Gaus himself cites only a couple of studies in support of this, but in fact a great deal of evidence to this effect has accumulated over the past two decades.
There is good reason to think that a propensity to rule-conformity is a genetic evolutionary development in humans (see for example Chudek and Henrich 2011). Thus, caring about shared rules does not depend on explicit reasoning about the value of moral accountability.
An important implication of Gaus’s view is that individual character and moral rectitude—having virtuous citizens—may be less important for the maintenance of an open society than Aristotelian philosophy and common sense might suppose. This is not to say that they are completely irrelevant, obviously. But if it is true that people tend to conform themselves to others’ rule-following expectations regardless of their own proclivities, then the key to a flourishing society will be having a set of good social rules more than having a set of virtuous people. The success of one’s own life might depend importantly on having Aristotelian virtue. But there is less reason from the societal standpoint to care about that than about people following the social rules, which may depend more on sheer social conformity than on good character.
Works Cited
Maciej Chudek and Joseph Henrich. 2011. “Culture-Gene Coevolution, Norm-Psychology, and the Emergence of Human Prosociality.” Trends in Cognitive Sciences, 15: 218–226.
Ernst Fehr and Simon Gächter. 2000. “Cooperation and Punishment in Public Goods Experiments.” American Economic Review, 90: 980–994.
Gerald Gaus. 2011. The Order of Public Reason. Cambridge University Press.
———. 2016a. “The Open Society as a Rule-Based Order.” Erasmus Journal for Philosophy and Economics, 9: 1–16.
———. 2016b. The Tyranny of the Ideal. Princeton University Press.
———. 2017. “Social Morality and the Primacy of Individual Perspectives.” The Review of Austrian Economics, 30: 377–396.
Astrid Hopfensitz and Ernesto Reuben. 2009. “The Importance of the Emotions for the Effectiveness of Social Punishment.” The Economic Journal, 119: 1534–1559.
Douglass C. North. 1990. Institutions, Institutional Change, and Economic Performance. Cambridge University Press.
So how does this apply to a real-world case — say, a society in which a large portion of the population (women, slaves, whatever) is excluded from a full enjoyment of rights, where such exclusion is justified by norms accepted by most members of society, including the excluded? How do we get from the norms of such individuals to the free and open society?
I can hardly speak for Gaus, having read just the first 100 pages of Tyranny of the Ideal and none of Order of Public Reason. But my first thought is that he seems to assume the context of a contemporary Western democracy, so that the question of whole classes of competent adults (slaves, women) being simply excluded doesn’t arise. Moreover, the paper discusses various idealizations of his model (secs. 2.1–2.3): everyone is a competent moral agent, everyone regards a rule as moral only if (a) it is not detrimental to anyone’s basic interests, (b) a person would endorse it regardless of his social position, etc. So I guess we’re not talking about classical Athens here! Frankly, it’s disheartening to realize that the scope of his theory is so limited.
Let’s set aside the idealizations and ask what a theory of the form I described in the post ought to say about a society like classical Athens. I would propose that, descriptively, it does pretty well. The basic elements of the theory are: (i) everyone has their own individual evaluative standards, but (ii) people acquiesce in a set of social rules even if they don’t personally agree with some of them, so long as they believe that other people insist on them, for the sake of social cohesion and out of fear of the moral disapproval of others, and (iii) it is this process of individual evaluation (plus compromise) that supplies whatever normativity the rules possess, not some objective “philosophical” standard. As regards the Athenian citizens, I see no reason to suppose the theory is inadequate. With regard to women and slaves and (to a lesser extent?) metics, I doubt their opinions count for nothing at all, but obviously their level of influence in the “negotiation” of social rules is much reduced. As you say, they may well accept the legitimacy of this. But I doubt they have much enthusiasm for rules counter to their interests! So, there would be pushback, but muted. This is not a prescription for much change in the direction of their liberation. But that sounds descriptively accurate to me.
Normatively? I suspect Gaus would have the philosophical perspective come to the rescue, since he says he departs from Hayek in having more confidence in the ability of the philosophical perspective to assess the merit of social rules. For myself, I think I’m a little more inclined to side with Hayek. This is a strange thing for me, since I’ve always thought of myself as firmly on the side of there being objective moral standards. But I notice that the comments I post on this blog always seem to run counter to that! I’m coming to think that social moralities, such as codes of “rights,” are cultural evolutionary products like natural language or canoe construction practices, and that the ultimate standard of evaluation for all of these is the degree to which they are adaptive. In that light, Gaus’s theory is one possible story about how the evolutionary process works for social rules.
Thanks for that post, David. I did read SMPIP — and got a lot out of doing so. Though Gaus does not explicitly say so, I think he is of the following school of opinion, at least with regard to moral normativity: (*) the normativity here comes from us, from our social practices of holding each other to account, not from anything prior to or more basic than that. This does not, as he points out, preclude evaluating the target standard or code. It does not preclude evaluating it, even, by reference to (a) objective normative facts of some sort — as against evaluating it relative to (b) the values or goals we happen to have or that creatures like us almost always have, these having only (instrumental) motivational and behavioral-tendency import, not objective (instrumental) normative import.
With regard to these prior, moral-code-evaluating elements, Gaus seems to be an instrumentalist who (i) accepts the idea that goal-relative or value-relative instrumentality is normative and (ii) takes moral values or goals of some sort to be among the basic values or goals that are hard-wired into creatures like us. The question he is interested in here is something like this: starting with large differences in the rules or standards according to which sub-groups of humans “hold each other to account,” how do we — and how and why might it be advisable to — generate common rules for governing larger groups of people composed of those sub-groups?
I think there is a lot right with this framing of how moral normativity works. But I think it also gets some things wrong. First, it seems that we are responding correctly when we resent or get morally angry when people treat each other (or the community, society, etc.) in certain ways that grossly offend any sort of moral code. For example, we tend to do the “holding to account” thing (both attitudinally and behaviorally) when we register one person causing pain or harm to another for trivial reasons or simply for the enjoyment of doing so, dominating them, etc. And these tendencies seem to fit prior normative standards: in doing this, we are responding correctly to the stimuli, conforming to a standard that applies to us and is in some sense prior to the response. If these responses are normatively appropriate, we have some constructed-moral-code-relevant prior normativity that is not merely instrumental. I don’t think characterizing this phenomenon as our “valuing sharing a moral code with others” (Gaus) does it justice (the characterization is too instrumental and too content-insensitive). And, if this is true, then, second, even if moral-code-type “normative force” does not exist until people start holding each other to account, a moral code’s having these sorts of basic content-sensitive elements gives it both motivational stability and, I would argue, prior normative backing. So, in this sense and in this respect, we can evaluate moral codes as incorrect in basic respects when they fail to align with basic moral responses.
If I integrate these elements into a framework somewhat like Gaus’s — in that the normativity of morality is taken to be something that we create or construct socially — we (and ancient Athenians) can and would appropriately object to the way ancient Athens treated its slaves, its women, and (to a lesser extent) its metics. That is, we would object to the moral and political rules or codes that governed social and political life in ancient Athens. That counts as bringing the “philosophical perspective” to bear. However, I’m not sure that Gaus has the resources in his particular philosophical perspective to do this, even if he says he would or wants to. This kind of move, it seems to me, is not well-integrated with the rest of his assumptions. In this way, Gaus*, but not Gaus, might have a decent response to Rod’s concern.
[What follows is the speculative, what-I’m-inclined-to-think portion of the show. Beware the drop-off in quality, wise caveats, good judgment, etc.!]
On the framing that I’m suggesting, Gaus is right in holding that, in general, we do not evaluate our moral code in some given context by assessing it against prior normative facts about which moral code is correct. Rather, at least on my telling, beyond the cases of obvious, blatant abuses of others (the community, the ability of the community to adequately comply with norms), we ask something like this question: given the (normal, non-pathological) values that people have and given the circumstances and institutions of our society, how should we draw the line between adverse actions and attitudes that are all-in worthy of moral objection (punishment, resistance) and those that are not?
This strikes me as an all-in practical or normative question, not simply a moral one. I worry that — even apart from “just look at reality to see if our moral code is correct” wrong-headedness — we mistake the desirability of moral improvement (specifically of greater and greater sensitivity to relevant adverse effects of actions and attitudes on all persons, on society, on society’s capacity for adequate moral norm-compliance, etc.) for a normative (and moral) requirement to adopt or abide by moral standards and rules that reflect the additional stringency. That potentially short-circuits a process of consensus based, in important part, on varying individual values (and circumstances, standing institutions, etc.) to figure out what it makes most sense to hold each other morally accountable for. I suspect that moral progress is, at least often, simply a desirable thing driven forward by enabling circumstances (of technology, of wealth, of wise social institutions) that lower the cost of increased sensitivity to adverse actions and attitudes. And therefore, at least often, it is not something that we can, or should, be frog-marched (or otherwise coerced or manipulated) into. Maybe this line of argument, if it continues to appeal to me, will lead me to criticisms of ideal theory that are similar to Gaus’s?