There’s been a lot of politics on this blog lately. Though I am in some sense a historian of political philosophy, I don’t much like politics, but I do like philosophy. So I thought I’d try to make a more purely philosophical contribution. It’s not that politics is unimportant. It’s just that it’s, so, well…frustrating. Then again, so is much contemporary philosophy. So perhaps I’ll just be trading one source of frustration for another. Let’s see.
What follows is a first attempt to get straight on some issues that have been simmering in the back of my mind for a while. I have no doubt that my formulations of these issues will be somewhat crude and in need of considerable qualification, if not revision. But that’s what I need you for.
Contemporary philosophers often appeal to a distinction between rationality and morality. The distinction comes in different forms, and in some sense it is innocuous enough. In the broadest, vaguest sense, rationality is a matter of what we have reason to do. Morality, in a similarly broad, vague sense, is a matter of how we ought to treat one another. There is probably no way to characterize this distinction that is neutral between all the various conceptions of rationality and morality, but that is hardly a surprise. What interests me is the thought that it is an open question whether morality is rational. For this question to make sense, we must be able to give some determinate content to the concepts of rationality and morality independently of one another. But I’m not sure we can. Rather, I’m not sure we can if we take the concept of morality to be truth apt. Otherwise put, if we think moral claims can be true, then I doubt whether it can be an open question whether we have reasons to be moral.
Philosophers who otherwise deeply disagree with one another nonetheless often agree that we can appeal to a “moral point of view” that yields some substantive, if not entirely determinate, claims about what we ought to do or how we ought to treat one another. They disagree, of course, about how things look from that point of view and even about what the distinguishing characteristics of that point of view are. Some claim, for example, that from a moral point of view it can never be right to treat one person as a mere means to some other person’s ends. Others claim that from a moral point of view every person’s interests or welfare matters equally. Proponents of the first sort of claim tend to insist that morality requires us to respect other people as ends in themselves, and hence forbids treating people in ways that they do not endorse. Proponents of the second sort of claim tend to insist that morality requires us to look to what is best overall, and hence to do whatever produces the best consequences for everyone. The former sort of view, standardly described as “deontological,” sees morality as centrally concerned with restraints on our conduct, restraints generated by the status of other persons as autonomous beings with intrinsic moral worth. The second sort of view, standardly described as “consequentialist,” sees morality as centrally concerned with producing what is best from an impersonal point of view, a point of view from which each person counts for one and only one and the most important thing is bringing about the states of affairs in which there is the most overall happiness or well-being possible.
Now, the wiser members of the philosophical profession — whatever their views — have recognized for at least as long as I have been alive that the deontological/consequentialist distinction is crude and inadequate. Furthermore, anyone who has paid attention to Anglo-American academic philosophy in the past thirty years knows that there is supposed to be a third alternative to consequentialism and deontology, namely “virtue ethics.” Moreover, virtue ethics in at least some of its forms is supposed to issue a challenge to the notion of morality as something fundamentally distinct from considerations about what kind of human life is good for the person who lives it. Yet despite the widespread acknowledgment of the excessive simplicity of the old deontological/consequentialist dichotomy, appeals to a distinctive and at least roughly determinate “moral point of view” persist. Consider, first, this claim from a recent book by a leading virtue ethicist:
One major problem facing any theory that seeks to show that well-being requires virtue (or the related thesis that rational self-interest requires morality) is that we are a small-group species, but morality requires us to transcend our group-ism and treat all human beings morally. Another major problem is that our own selves — our points of view, our values, our desires — often seem far more real to us than other people’s points of view, values, or desires, even when those other people are members of our own small group. Yet morality requires that we recognize the equal reality of other people. (Neera K. Badhwar, Well-Being: Happiness in a Worthwhile Life, 15).
Consider, second, the following claim from one of the foremost living defenders of moral realism:
I shall argue that there is no successful moral objection to utilitarianism from the personal point of view. There are various ways in which utilitarianism can accommodate the moral significance of the personal point of view. It must be conceded, however, that these strategies do not eliminate all conflict between utilitarianism’s impartiality and the personal point of view. But this residual conflict does not constitute a moral objection to utilitarianism, for, in this conflict, the personal point of view represents worries about the rationality or supremacy of utilitarian demands. These worries are properly understood as worries about rather than within morality and so do not threaten and, indeed, support a utilitarian analysis of morality. (David Brink, ‘Utilitarian Morality and the Personal Point of View,’ 419).
Brink explicitly appeals to a notion of morality that contrasts with rationality. There is utilitarian morality, with its fundamental impartiality, and then there is rationality, which is associated with the personal point of view. There may be legitimate doubts about the rationality of utilitarian morality, but these cannot be doubts about the morality of utilitarianism. The rationality of utilitarian morality may depend on the personal point of view, but the impersonal moral point of view tells in favor of utilitarianism as a theory of morality independently of what we should say about its rationality. Badhwar, too, appeals to a notion of morality that yields determinate content independently of any claim it may have to be rational: morality requires that we treat all human beings morally (whatever exactly that means) and that we recognize the equal reality of other people (whatever exactly that means). Both seem to think that we can know what morality requires in advance of knowing whether morality is rational, that is, whether individual agents would be rational in respecting the demands of morality or irrational in violating those demands.
Badhwar and Brink are also both moral realists. That is, they both think not only that claims about morality can be true or false, but that there are some true claims about it. Thus they not only oppose various forms of non-cognitivism, on which moral claims or judgments are really ultimately just expressions of feelings or desires or commitments or some other sort of non-cognitive attitude; they also both oppose any form of conventionalism or relativism on which what makes a moral judgment true is fundamentally a matter of the conventions adopted in some particular society or group. In other words, both think that claims like “morality requires that we recognize the equal reality of other people” or “morality requires that we take an impartial point of view” are not only true judgments with genuine cognitive content, but that their truth does not depend on any particular set of social conventions or practices. As they see it, morality requires whatever it requires — recognizing the equal reality of other people, taking an impartial point of view — for all human beings everywhere.
This is a familiar enough sort of claim. After all, when Jefferson wrote “that all men are created equal, that they are endowed by their Creator with certain inalienable rights, that among these are Life, Liberty, and the Pursuit of Happiness,” he was not purporting merely to describe a conception of morality endorsed by some people somewhere and sometime. He knew full well that his claims would not have secured agreement among all people at all times, but he believed that they were true nonetheless. That is the kind of truth that Badhwar and Brink claim for their conceptions of what morality requires. They would not regard someone who disagreed as simply operating with a different conception of morality or as opting not to take up the moral point of view. Rather, they would regard someone who disagreed as making a mistake, as failing to grasp a truth in much the same way as people who believe that the elegant adaptedness of many organisms to their environment could only be explained as the product of intelligent design fail to grasp a truth about evolution via natural selection. Not all who have failed to grasp that truth were guilty of epistemic vice — certain people in certain times and certain places have been reasonable in thinking that only an intelligent designer could have brought about such adaptedness. But however justified anyone who believes otherwise might be, moral realists like Badhwar and Brink maintain that their claims about what morality demands are true, and anyone who believes otherwise is in fact mistaken.
So Badhwar and Brink both embrace moral realism and a conception of morality as only contingently related to rationality. It is just this combination of claims that strikes me as false, and perhaps even incoherent.
What exactly would it be for a moral judgment to be true? Morality purports to tell us what we ought to do or how we should treat one another. So far as I can see, then, the truth of a moral judgment would have to be the truth of a claim about what we have reason to do. But if the truth of moral judgments just is the truth of judgments about what we have reason to do, then the relationship between morality and rationality cannot be contingent even in principle.
To appreciate this point, contrast morality as realists conceive it with the rules of etiquette (some readers will recognize here that I am borrowing this contrast from Philippa Foot’s ‘Morality as a System of Hypothetical Imperatives’). It seems clear that we can make true or false claims about what etiquette requires without even suggesting that we have any reason at all to do what etiquette requires. We might even make true claims about what etiquette requires while staunchly maintaining that we have good reason not to do what etiquette requires. According to Herodotus, visitors to the Persian King would customarily greet him with an act of proskynesis — literally something like “dogging toward,” prostrating oneself on the ground like a dog as an expression of submission and respect for the king’s great authority. Greeks, as the freedom-loving people they were, would haughtily reject the notion that they should lie down on the ground like dogs when approaching the Persian King on some diplomatic business. But those same Greeks would acknowledge that Persian etiquette requires visitors to prostrate themselves before the king. This requirement is simply a fact, and to articulate it is simply to describe a fact about Persian etiquette, not to express an emotion or a desire or a commitment, and certainly not to claim that one has a good reason for action.
Morality, however, cannot be understood on the model of etiquette, at least not without abandoning moral realism. Admittedly, if we were to conceive of morality on the model of etiquette, we could make true descriptive claims about what morality requires. But we would simply be making true descriptive claims about a conventional social practice, and that is precisely not what moral realists take themselves to be doing when they talk about the requirements of morality. When Jefferson wrote the opening of the Declaration, he did not mean to say simply that there is a conventional social practice according to which one should respect all other human beings’ lives. He was talking about a requirement that is binding on us regardless of whether we adopt some convention, regardless even of whether we recognize it at all. If we were to conceive of morality on the model of etiquette, we would need to recognize a plurality of conflicting moralities, just as we recognize a plurality of conflicting systems of etiquette. But moral realists, however much variation they recognize in the demands of morality, do not recognize a plurality of conflicting moralities. Moral conventionalists, relativists, fictionalists, and other sorts of anti-realists could conceive of morality in this way. Realists cannot.
If that’s right, though, then moral realists cannot in fact specify the content of morality and rationality independently of one another. Hence Brink could not successfully defuse objections to utilitarianism from the personal point of view on the grounds that they are not objections about the morality of utilitarianism, but about its rationality. To show that utilitarian morality is true would require showing that we have reasons to act in accordance with it, and that is precisely what objections from the personal point of view deny. Similarly, objections of the sort that Badhwar sketches — that well-being cannot require virtue because virtue (“moral” virtue, in any case) demands attitudes and actions that are not necessary for our well-being and are often even contrary to it — could only get off the ground if what we have reason to do is not grounded in what promotes our well-being. Otherwise the claim that morality requires such-and-such attitudes and actions could not be true in any way except the way in which the claim that Persian etiquette requires proskynesis is true.
At this point an important qualification is in order. Moral realists need not maintain that all human beings always have decisive or overriding reason to do what morality requires. One might think that other reasons can sometimes, perhaps even often, defeat whatever reasons we have to do what morality requires. In that sense, the relation between morality and rationality will be contingent rather than necessary. But what we should then say is that the relationship between morality and decisive reasons for action is contingent. We cannot say that morality requires something that most people most of the time have no reason at all to do. Or, rather, we cannot truly say that without abandoning moral realism.
Many consequentialists and deontologists would likely agree with me so far. They would concede that the truth of their moral judgments is the truth of claims about what we have reason to do. They would insist, however, that we have sui generis moral reasons, reasons that differ in kind from prudential reasons. Hence, they might say, it remains an open question whether morality is prudentially rational, but not whether it is morally rational. Views of this sort can make sense of the idea that there is an open question about the rationality of morality while retaining moral realism. So, problem solved?
Not quite. Defenders of this sort of view face two challenges. The first is that they owe us some account of what these sui generis moral reasons are and why they are genuine reasons — that is, why they are considerations that really do tell in favor of our acting in one way rather than others. The second is that they owe us some account of how these moral reasons fit together with prudential reasons into a unified conception of rationality. As I see it, standard accounts of sui generis moral reasons are implausible at best and there is no obvious solution to the problems posed by the notion of two fundamentally different kinds of reasons that cannot be assessed in terms of some more general notion of rationality.
I obviously can’t survey all the theories of moral reasons that philosophers have offered, let alone provide a decisive refutation of them. So let’s just consider two broad approaches, again one associated with consequentialist theories and the other with deontological theories.
The consequentialist wants to say that the overall goodness of states of affairs in the world yields fundamentally agent-neutral reasons for action. These reasons are agent-neutral because they are not reasons for any agents in particular but for all agents everywhere. They are fundamentally agent-neutral because they do not arise out of any agent-relative reasons, that is, reasons that are at bottom reasons for you to do something and not necessarily for anyone else to do that thing. The precise formulation of the distinction between agent-relative and agent-neutral reasons is a topic of dispute, and one that involves more technical logical notation than I care to indulge in. Fred Miller gives us a nice working definition when he writes that “a value or reason is agent-relative if its description includes essential references to the agent who has that value or reason, and agent-neutral if its description does not include an essential reference to a person who has that value or reason.” (Nature, Justice, and Rights in Aristotle’s Politics, p. 131). But for all the ingenuity and sophistication that has been put into defending the claim that there are basic agent-neutral reasons, they seem rather mysterious. When we remember that a reason for action is ultimately just a consideration that tells in favor of acting a certain way, it should seem odd to suppose that there are considerations that are not essentially considerations for any particular agents, but simply considerations, considerations that happen to be considerations for you because they are just considerations, period.
Agent-neutral reasons may seem indispensable, however, if we conflate agent-neutrality with objectivity, universality, or non-instrumental altruism. If we think that agent-relative reasons cannot be objective, that they cannot be such that all particular agents have reasons of precisely the same kind, or that they can only take the interests of others into account in a purely instrumental way, then we — well, many of us, anyway — will be inclined to believe that there must be agent-neutral reasons.
In fact, universality and objectivity are sometimes invoked to show that rationality as such is agent-neutral. This approach is perhaps more common among those who favor a deontological approach. Kant famously argued this way in the Groundwork: reason as such is universal and objective, and it therefore cannot find any fundamental significance in the fact that my interests are mine or that my humanity is mine; the universality and objectivity of reason should lead us to respect humanity wherever it is instantiated, not simply in ourselves. But it is not only Kantians who reason in this way. Here is John Finnis, a Thomistic Aristotelian, though one who is often described as a deontologist rather than a virtue ethicist (the awkwardness of the classification seems like more evidence that the classifications are problematic):
Next, the basic goods are human goods, and can in principle be pursued, realized, and participated in by any human being. Another person’s survival, his coming to know, his creativity, his all-round flourishing, may not interest me, may not concern me, may in any event be beyond my power to affect. But have I any reason to deny that they are really good, or that they are fit matters of interest, concern, and favour by that man and by all those who have to do with him? The questions of friendship, collaboration, mutual assistance, and justice are the subject of the next chapters. Here we need not ask just who is responsible for whose well-being…But we can add, to the second requirement of fundamental impartiality of recognition of each of the basic forms of good, a third requirement: of fundamental impartiality among those human subjects who are or may be partakers of those goods. (Natural Law and Natural Rights, 106-7)
Finnis here infers that it is a basic requirement of practical reason that we be fundamentally impartial among people. But he infers fundamental impartiality simply from the objectivity and universality of reason. Reason recognizes that the same basic goods are good for you and me and every other human being, and that they really are good insofar as they are aspects of human well-being, regardless of whether someone believes that they are good or desires them. And so, Finnis thinks, I could only reject the requirement of fundamental impartiality by denying that these goods are really goods for other people. But it is, to put it mildly, hard to see how I would be committed to denying the objective goodness of other people’s well-being or the universality of certain goods as aspects of human well-being simply by rejecting the principle of fundamental impartiality.
A similar non-sequitur seems to plague theories like Kant’s. We could agree that reason as such identifies universal and objective features of humanity, and that if you have an objective reason to act in a particular way in a particular set of circumstances, then anyone else who finds herself in precisely those circumstances will have an objective reason to act in precisely the same way. Nothing about those claims would commit us to the notion that reasons for action are not essentially reasons for the particular agents who have them, reasons for them rather than for any other agents. Perhaps objective reasons must be universalizable; it does not follow that they are in fact universal. Even if every particular agent has reason to do precisely the same thing, it need not follow that the reason is agent-neutral. There is nothing incoherent in the notion that you and I and every other individual have an agent-relative reason never to lie or commit suicide. Kant is probably wrong to claim that we always have decisive reason not to do those things, but the truth or falsity of those claims does not depend on whether there are any agent-neutral reasons.
I’ve barely scratched the surface of accounts that defend agent-neutral reasons, and I wouldn’t expect anyone who believes in them to be moved by what I’ve said so far. But even if we grant that there are agent-neutral reasons, if we also grant that there are agent-relative reasons, then we seem to be stuck with two fundamentally different kinds of reasons that cannot be assessed in terms of some other, broader sort of reason. Of course, if I have an agent-neutral reason to sell all my belongings and donate the money to Oxfam, then I can appeal to that reason against my agent-relative reason to keep building up my library. But I can also appeal to my agent-relative reason to keep building up my library against my agent-neutral reason to sell it and donate the proceeds to Oxfam. If we have agent-neutral and agent-relative reasons, each equally basic, then there will be no rational way to resolve conflicts between them. We end up with what Sidgwick called the “dualism of practical reason”: two equally compelling sources of reasons that conflict, neither with priority over the other nor with the rational resources to defeat the other from any neutral point of view.
I suppose there might not be anything absurd about the dualism of practical reason, so that we can’t just reject it out of hand. But the notion that there is no rational way to assess the relative merits of conflicting considerations that genuinely tell in favor of acting in incompatible ways seems to threaten the coherence of rationality altogether. It is not simply that values and reasons are incommensurable and cannot be compared on a single quantitative scale or neatly placed in an ordinal ranking; one can recognize that kind of incommensurability without giving up on rationality in decision-making. Nor is it simply that some of our choices will be rationally arbitrary because reason does not require any one of a set of incompatible courses of action. Rather, if we have to accept the dualism of practical reason, that kind of arbitrariness will be pervasive and affect every choice we make. What sense could we then make of the idea that these two different sources of reasons were really making demands on us in the first place?
Perhaps the clearest indication of the difficulties posed by the dualism of practical reason is that most defenders of agent-neutral reasons do not allow that we also have agent-relative reasons. Thinkers as radically opposed as Christine Korsgaard (perhaps Kant’s leading contemporary heir, at least in Anglophone philosophy) and Peter Singer (perhaps the leading contemporary heir of Bentham) agree that only agent-neutral reasons are genuine reasons (Korsgaard’s most recent work seems to take a view that complicates the distinction, but I confess I haven’t studied it closely). But basic agent-neutral reasons seem problematic enough; the idea that they are the only basic reasons we have seems even less plausible (on this point, and for much of the rest of what I have to say about these things, I am indebted to Mark Lebar’s ‘Korsgaard, Wittgenstein, and the Mafioso’ as well as Mark Murphy’s Natural Law and Practical Rationality).
But if we recognize only agent-relative reasons, where does that leave morality? Can we get a satisfactory theory of justice out of such bare beginnings? That depends, of course, on what satisfies you, but plenty of people have thought not. I like to think that we can, that Aristotle had such a theory, and that his basic framework can sustain a theory of justice that would satisfy most of the intuitions that lead people to think that agent-relative theories like what we find in the opening of Republic II or in Leviathan are unsatisfactory. But alas, that’s a topic for a whole other book. I hope to write that book someday, but for now I’ve already written too much.
(1) You are right, and I think there is broad agreement on this point: it is incoherent to say that one has no reason (or that it is in no respect rational) to act in accordance with a moral obligation. Morality-talk, unlike etiquette-talk, is talk about actual obligations and hence reasons. Though one might treat morality like etiquette (by abstracting from the idea that moral claims are claims about our having reasons to do the moral things) and in this sense ask “Why think morality is rational at all, in any respect?”, this is using ‘morality’ – or ‘obligation’ if one is concerned with specific moral claims – in a special, stipulated sense. This may be useful for specific explanatory, theory-building, and theory-testing purposes.
(2) It is a going view, if not the going view, that reasons for everyone to do things are reducible to reasons for particular agents to do things – plus some account of why there is some sort of necessary universalization over particular agents to get the universal (“agent-neutral”) claim. For example, Mark Schroeder suggests that you get the requisite necessary universality (or at least a kind of necessary universality, of the only sort that there could plausibly be) by way of massive overdetermination regarding the instrumental (and non-instrumental) utility of moral ends (value) and action-types (obligation). If you want much of anything that human beings are born wanting, you will have reason (if not significantly strong reason) to promote moral ends and engage in moral behavior (or follow moral rules).
(3) A view like Schroeder’s does not require any kind of “dualism” about practical reason. However, we should want a good characterization of what the means and ends of morality are in order to understand what it is – maybe it is what Mark says, maybe something else – that makes for the necessary universality. I suspect that it will be helpful to construct such a story/theory about morality while keeping in mind that there is a more general, prior account of a similar sort of necessary universality with regard to rationality generally (or reasons generally). We would need such an account if we were merely practically rational creatures, not moral creatures (and here I use ‘moral’ in the non-normative or merely-conditionally-normative sense that fits with etiquette-talk). Maybe, when we get such an account, there will be some interesting things to say about moral versus non-moral reasons and when or under what conditions the latter threaten to overwhelm the former.
(4) I find impartiality quite interesting. In what respect (or from what “standpoint”) does one, for example, have as much reason to promote the well-being of some stranger (or enforce the legitimate claim of some stranger) as one’s own? When, if ever, is this respect (whatever it is) of having reason to impartially promote important things for agents irrespective of whether the agent is yourself (or those close to you personally) so strong that partial reasons get swamped? I find the following (partial or conditional) hypothesis promising: if each of us has conclusive reason (surely partly non-instrumental) to establish and maintain communities in which each of us bears the moral relationship to others (whatever this relationship turns out to be), then each of us has conclusive reason to promote any given person doing what is required to be in the moral relationship with others. Such conclusive reasons would be relative to the relevant sorts of choice-situations, those in which achieving or maintaining a moral community is at stake. Even if the relevant ends here are extremely important (even if, say, they had lexical priority to all others in most situations), there would in all likelihood be choice situations in which the reasons associated with these ends would not be relevant to deciding what to do.
(5) It seems unlikely that the strength of the normative valence in favor of doing moral things (whether this is promoting moral value or conforming to some instantiation of a schematic specification of a rule) would be independent of one’s non-moral ends/reasons. And it seems unlikely that non-moral reasons would not have a major role in determining the best way to fill in each rule-schema (as when partial, non-moral reasons determine, in part, when it is acceptable to lie).
You likely know the recent literature better than I do, but my sense is that Schroeder’s sort of view is by no means the dominant way of thinking about reasons and morality. Schroeder is basically a Humean; Humeans as such do not believe in basic agent-neutral reasons. I’m not a Humean, but I don’t think Humeans face the kinds of difficulties that I’m concerned about in the post. Consider, for example, your claim about Schroeder’s view that “if you want much of anything that human beings are born wanting, you will have reason (if not significantly strong reason) to promote moral ends and engage in moral behavior (or follow moral rules).” Kantians and consequentialists of many stripes would reject this claim, at least insofar as it is supposed to offer us an adequate account of moral reasons. Given the conception of morality that they operate with, I think they’re probably right to reject it; it doesn’t seem plausible to suppose that so long as I am not psychologically bizarre, my desires will give me ample reason to respect all human beings as ends in themselves, and in any case philosophers like Korsgaard and Darwall think that our reasons for respecting others are not derived from the content or authority of our desires anyway. Likewise, it certainly doesn’t seem plausible to suppose that so long as I am not psychologically bizarre, my desires will give me ample reason to accept consequentialist theories like Singer’s or Kagan’s, which both demand considerable individual sacrifices.
What troubles me about your Schroederian take on things is that it still seems to help itself to a concept of morality that allegedly has some determinate content independently of whether we suppose that we have reason to do what morality requires. What I’m resisting is the idea that we can give any content to that notion independently of what we think we have reason to do. Many philosophers would agree, but it doesn’t follow that they aren’t still operating with some such notion. Part of the trouble here is that, if we set aside questions about the rationality of morality, there is no single conception of morality — indeed, I doubt that there is even a single concept of morality — that deserves to be privileged over others, any more than there is a single conception of etiquette that deserves to be privileged over others. In effect, what I worry that your Schroederian reflections — and a whole, whole lot of other contemporary philosophy — are doing is (i) conceding that of course morality must be a matter of what we have reason to do, so that it cannot be understood on the model of etiquette, but then (ii) arbitrarily selecting one set of moral ideas and regarding it as paradigmatic if not definitive of ‘morality,’ and thereby reasoning no differently than one would if one simply selected one among many systems of etiquette as what ‘the’ proper thing to do is. In fact, however, different theories of reasons and rationality will have different implications for different conceptions of morality. It is no good to obscure this fact by talking of ‘morality’ and ‘the moral’ or ‘moral reasons’ as though there were a single coherent notion shared by all parties to the debate.
I’m pretty sure you agree with this point, because your own contributions here about rights and reasons and the like pretty clearly illustrate the interdependence of a theory of rationality and a theory of morality, an interdependence that has pretty strong consequences for the content of the moral theory. You’d agree, wouldn’t you, that your theory of reasons will not support a utilitarian theory of morality like Singer’s or a natural law theory like Finnis’?
Ultimately, I am inclined to make the strong claim that we should not try to theorize about morality and rationality separately from one another. We should not try to have a theory of reasons over here and a theory of morality over there; our ideas about justice, benevolence, and other ‘moral’ virtues should be part of our broader thinking about practical reason. If that’s right, then an awful lot of contemporary work is wrong-headed, because it supposes that we can develop adequate theories of practical reason without paying much attention to ‘moral’ matters, while a lot of moral theory is divorced from broader thinking about practical reason and hence ends up giving us neat, sophisticated, even inspiring moral theories that nonetheless make little or no contact with what most of us reliably have reason to do.
I don’t have too much to add, because I basically just agree with David here. But I wanted to add one thing to David’s first paragraph, intended as a friendly extension (not even a friendly amendment).
So once again, take Schroeder’s view: “if you want much of anything that human beings are born wanting, you will have reason (if not significantly strong reason) to promote moral ends and engage in moral behavior (or follow moral rules).”
What David says in criticism of this view seems to me just right. What I’m doing here is explaining why it’s right.
If we take the claim literally, then it says: we should build morality on our wants at birth plus the instrumental principle. That’s an obvious non-starter, and for that reason probably an uncharitable way of reading the claim. Our wants at birth, taken literally, have nothing at all to do with morality. They’re self-centered in the literally infantile sense. An infant only has reasons in a very metaphorical sense, and adding the instrumental principle to that would get you nowhere.
Taken more charitably, the claim says that if you take a psychologically normal agent, you can build morality on an idealized account of such an agent’s desires plus the instrumental principle.
The problem is that what makes a psychologically normal agent normal is his upbringing. Only a proper subset (frankly, a rather small proper subset) of the sum total of ways of bringing someone up will produce a psychologically normal agent. There are far more ways of screwing this up than of getting it right.
It seems to me that if the idea of a psychologically normal agent is to yield morality by means of instrumental rationality, that’s only because the concept of a “psychologically normal agent” depends on a morally-loaded conception of human flourishing in the first place. Otherwise, for the reasons David indicates, I don’t see that the idea has any initial plausibility.
Now suppose that the only way to produce a psychologically normal agent is for that agent to be brought up by “upbringers” (parents and others) who aim, for specifically moral reasons, at producing psychological health in the agent. In other words, these upbringers won’t produce a psychologically normal agent if they just go through the motions of “raising a normal agent,” reading the instructions out of a recipe book, but lacking the right motivations for it themselves. They have to have the right motivations for raising the child a certain way and have to act on them qua right motivations. They’ll only have a hope of success if they do.
I doubt that there is any way to conceptualize “psychologically normal agent” by abstracting from that developmental process. If so, the very idea of a “psychologically normal agent” presupposes an account of the reasons for actions that the agent’s “upbringers” have for facilitating the agent’s development toward psychological normality. In other words, the upbringers’ moral reasons for raising a healthy child are partly constitutive of what psychological normality is. Put yet another way: psychological normality presupposes a moral conception; you can’t build a conception of morality on some non-moralized conception of “psychological normality.” There is no such thing.
So my hypothesis/criticism of the Hume-Schroeder approach is this: yes, your desires may give you reason to be moral if you have the “right” desires in the first place–i.e., (trivially) desires such that acting to satisfy them puts you in the moral ballpark. But the best account of what it is to have the right (=normal) desires makes essential reference to a moralized account of what it is to raise a child in a developmentally healthy way.
For an approach like Schroeder’s to work (at least Schroeder as described above), he has to smuggle the whole moralized account of development into his conception of the psychologically normal agent, then claim that the normal agent’s desires plus instrumental rationality yield morality. That will work, but only because what’s doing the real work here, off-stage, is a moralized account of desire via some account of moral development, along with the thick conception of rationality that is employed by “upbringers” of normal agents. The work is not being done by some non-moralized account of desire plus the instrumental principle.
Take the whole thick Aristotelian story I’ve told out of the picture, and you’re just left with infantile desire–what we’re literally born wanting. But plug it back in, and you’re giving the Humean a moralized account of our wants that draws on a thick conception of flourishing and practical rationality that’s not supposed to be part of his account.
I don’t know whether my comment here really fits what Schroeder actually says. I’m really discussing PoT-Schroeder rather than real Schroeder. But my hunch is that something like this will be true of any Humean account.
Thanks, Irfan. This gives me an excuse to distinguish my “Humeanism” from Schroeder’s simple, plain-Jane-plus-some-tricks Humeanism!
I did not mean to be endorsing anything like Schroeder’s view of how you might explain universal (“agent-neutral”) reasons. My point was that, among theorists of reasons for action and practical rationality, there is pretty widespread agreement that the basic phenomenon (of having reasons to respond to things generally) is individual-centered and you need a story to explain any reasons that are universal in some non-accidental way (including reasons to do things that we think, based on a common conception of morality, morality recommends or requires). It is true that people who are focused mainly on morality and are then pressed to square their views with the going theories of reasons and rationality may differ or struggle to integrate their preferred moral view into a plausible theory of practical reasons and practical rationality.
On Schroeder: his position is somewhat worse than what you indicate because what follows from a standard, plain-Jane Humeanism like his is that *all and only* the non-instrumental desires *that you presently have* produce reasons for action. His position is really just this: pick almost any set of non-instrumental desires – actually present in the agent at the time of decision – that is not specifically designed to frustrate the existence of one having *any* reasons to do things that count as moral and you get *some* reason to do what morality recommends and requires. Because his account of the strength of reasons is entirely distinct from his account of what we have any reason at all to do (remember, you have reason to eat your car on his view!), the claim here is actually really weak, despite appearances. You can see how trivial it is by noticing that there is no premise in the argument to the effect that there are basic, typical features of human social existence that are indispensable for anyone achieving a wide range of typical individual desires/aims. He is being too damned clever by half and not keeping his eye on intuitive plausibility (but then again he places little weight on intuitive plausibility; his focus is on the range of phenomena to explain; explanation in philosophy is very hard, so let’s break the task down and see what machinery we can invent that will take care of any given sub-issue that can be peeled off; we make progress by working out such hypotheses time and again, even when they seem crazy; eventually, we end up getting some of the pieces of the puzzle right, which amounts to explaining what needs to be explained).
I’m Humean in the following sense. (a) Basic agent-specific benefit or value (and consequent instrumental value of the agent’s actions and other things as well) is constituted by *the affective states of the agent* (and, yes, this requires further justification/explanation). (b) Rationalizing (or belief-relative) reasons (for action, for desiring, for believing, for admiring things, for having social attitudes like resentment) depend on rational (or rationalizing) response. Such responses are normative in a basic way that is distinct from basic value; they are normative, not simply reflective of the proper functioning of relevant brain circuits, because we have basic *motivations* to exhibit the requisite proper-function-realizing patterns of attitudes with these brain circuits.
We can think of having objective normative reason either in terms of instrumental value or in terms of the response that would be rational if one knows the relevant facts about what promotes, achieves or realizes what. There are two different features here (necessarily correlated at least in some cases, particularly with respect to reasons for action) and though it may be important to distinguish them in some theoretical contexts, they are not generally distinguished. (In what follows, I’ll consider the relevant “objective” evaluations to be properly or narrowly evaluative, not properly or narrowly normative or idealized-rational.)
On this sort of view, we can evaluate non-instrumental desires (conative states) relative to basic affective states (and perhaps other non-instrumental desires) objectively and instrumentally. We might similarly evaluate non-basic affective and emotional states – things like resenting another person, admiring another person, experiencing satisfaction upon completing a task, etc. Though such objective, instrumental evaluation comes along with an ability to engage in instrumental rationality (so that there are some rationalizing reasons in the ballpark), we do not seem to have rational response processes that spit out non-instrumental desire outputs given non-instrumental desire (or affective state) inputs. This is not part of reasoning or rational response (though it could be that we form non-instrumental desires consequent on such). If this is right, we are stuck training ourselves into (or having others train us into) having the non-instrumental desires that it is good for us to have (including, presumably, the right ones for being a moral person).
Also, if this kind of view is right, there are practical rational responses that are not responses to desires in any obvious way. For example, the enkratic response: I judge that PHI-ing is the best thing to do in the circumstances, so it is rational (at least in some important respect) that I attempt to PHI (or immediately intend to PHI). So it looks like there are at least two competing “rational channels” for producing action. Now maybe the relevant evaluative judgment here is a judgment about what best achieves or promotes non-instrumental desires, and to be fully appropriate or rational in the relevant sense the input judgment for the enkratic response must be justified *and true*. If so, the rational enkratic response would be rational, or at least fully rational, only by being an indirect response to desires. However, this seems implausible (you have to add ‘and true’ onto the end there to get this result – and I see no non-special-pleading reason to add this to specify what the fully rational response is).
I don’t think that explaining or justifying an acceptable sense (for morality or rationality) in which value (or reasons or rational response) is universal to all or most humans is very hard or interesting. Basic value (benefit, pleasure) experience and basic sorts of rational response are hard-wired into our nature and the rest is just details. After all, as a matter of being animals with at least something of a mental life, we all have the functional capacity to respond rationally to instrumental information with respect to desires (especially immediate aims). What is hard, with respect to morality, is that it seems hard to get from basic practical affect and rational response (the mixed bag of these that concern, among other things, one’s private benefit and negotiating the social world at least partly in competition with others) to something we recognize as morality. It is hard to justify a social code that is not more contingent and more instrumental than we think it needs to be to count as a moral code.
What is needed in order to make progress is: (a) a full accounting of the relevant basic and persistent motivational (affective, conative) elements in human psychology and (b) an account of how, given the right kind of upbringing and propitious-enough social circumstances, we have conclusive reason to be moral (have moral desires, act as morality recommends or requires). In line with what David says, such an account would also allow us to vindicate a core content of morality (or not, in which case we should follow a “schmoral” code rather than a moral one), to identify the social circumstances that might be required for being moral to be valuable or rational, and to specify the shape that being moral should take in any given set of social circumstances (including those that are less propitious for being moral).
Because a purely instrumental justification of moral rules seems inadequate for the reasons just suggested, I have suggested (in other posts) that we have some kind of basic, persistent desire to follow social rules or certain sorts of social rules (those that constitute respecting others and that tend to achieve social trust consistent with competing considerations). Such desires might include desires to conform to particular rules concerning salient types of social actions (like rules for refraining from resorting to violence against others). In particular, I have suggested that the content of such basic “type-of-rule-following” desires, or some of them, has the form of a rule-schema, so that how the rule is best filled in is specified by social conditions and relevant competing non-instrumental desires that are either part of the basic set or justified on their basis. In this way, I hope, we might explain the strict rule-following (or strong reasons to follow rules) that seems to be characteristic of morality.
I have also taken on, as part of the conception of morality that I think we all share, the idea that, at least in certain important contexts or for certain important purposes (from a certain “standpoint” perhaps), we non-instrumentally care about and seek to promote the well-being of all agents (or all agents within a group, population, or community) impartially, where this includes no special place even for one’s own well-being. Since this idea of impartiality is sufficiently weak – it might well be of no consideration in one’s decision-making and planning in some contexts, and it might sometimes, even when the relevant sorts of things are at stake, be overridden by partial concerns – it seems to be the right way to take impartiality on board as part of our concept of morality. In this way, I hope, we might take on impartiality in a way that is not immune from plausible explanation in terms of reasons that are (or value that is) ultimately reasons (or value) for particular agents.
Anyhow, this is where my broadly Humean approach (including how it might contain or yield moral reasons or justification for being moral) is at right now. Because my approach is consistent with desires giving rise to reasons for action (in the relevant fact-relative or objective sense) only if they are good desires to have, the universality problem is not so pressing. The features of morality that cry out for explanation, on my view, are impartiality (including impartial value) and strong reasons to follow rules (or reasons to strictly follow rules). Though the content of the rules is important too, I think it is the strictness that is the real mystery. What makes these problems hard, though, is not (my sort of) Humeanism. It is the idea that we need to explain these important normative features (or explain them away, in favor of weaker but similar ones) in terms of some prior set of ends or reasons or elements of practical rationality (whether or not these are in some sense provided by the affective or conative features of our psychology) that does not obviously contain these puzzling features (reasons) and contains elements (reasons) that strongly compete with those features (reasons).
(a) I agree with just about everything you say in your response to me, David. The focus of my comments was what I took to be the most important explanatory question: “What are these agent-neutral (or necessarily universal) reasons, particularly what are those of morality like, and how do they relate to the fundamentally agent-centered reality of what reasons are?” Putting the explanatory question in this way does put the defender of basic agent-neutral (necessarily universal) reasons out in the cold, but that is deserved.
(b) To focus more squarely on your main concerns, I don’t think there is anything wrong with starting with an intuitive conception of morality and then asking what reasons we have and whether they support that conception. Maybe you agree with that. But I think we should be careful and humble in putting forth such conceptions. And mindful that the actual, true content of morality – not just its normative oomph – is provided by reasons (and these are ultimately centered on particular agents). The essential point, I guess, is that morality, unlike etiquette or chess, is individuated – its content is specified – by our reasons. So to really get the content of morality right in a definitive way, we have to keep coming back to reasons. (I’m not sure that this is precisely what you say, but it is in the same ballpark.)
(c) Going the other direction, I think it is obvious that we have non-instrumental reasons to relate to or treat others in ways that count as moral (rough gloss, we have reason to achieve or promote broadly pro-social relationships and outcomes with any given other human being and publicly or in groups or communities with others). So it is silly to start out your theory of reasons and rationality without moral reasons already being there in some form (however general).
I think the desirable end-states associated with basic moral reasons (and actions and other responses) are quite general (think general obligation schemata that are necessary constituents of being in the moral relationship with others). These would get filled in (the specific actions would get filled in) by our entire context of practical reasons and by how the instrumentally relevant bits of the world are. Perhaps we will not find what we are looking for here if we are looking for the specific moral demands or values that seem intuitive to us (or that get specified in general theories that we are attached to), but if so that is our own fault.
(It should be obvious that I don’t endorse anything like the strong normative overdetermination thesis that Schroeder does. Without basic pro-social, moral reasons – and specifically without general reasons to respect others in particular ways (that I interpret as general interpersonal obligation action-schemata) – I don’t think we get familiar morality. This kind of view requires a response to others that is as distinct – and as sophisticated or sensitive – as one’s response to evidence in belief-formation or to rankings of relevant desired or desirable outcomes in making practical decisions. Which is to say that morality involves something like its own form of rationality or rational response.)
(d) To answer your specific question: I hope that my kind of view would vindicate (and explain and justify) a morality with inherent-dignity-respecting elements (a la Kant), related interpersonal obligation schemata governing action-types along with some ordering of them (a la Ross), reasons to promote the well-being of all equally or impartially (a la standard impartial or “agent-neutral” consequentialism) – and much more besides corresponding to other elements of moral practice (such as reasons for making claims against each other, expressing moral outrage, etc.). I very much doubt that my sort of view would vindicate anything like purely impartial consequentialism or quite the priority or importance (or Kantian explanation perhaps) that deontologists give to following the rules that constitute (or realize) specific interpersonal obligations. It would try to explain things like the scope and strength of reasons for being impartial with respect to promoting the well-being of everyone (or each person in a community), with one’s own well-being having no special place, and what it is for things to be just-plain-valuable (or valuable in a way that is indifferent between agents and their particular profiles of reasons).
(e) In addition to the error of taking intuitions about the content of morality too seriously (and in so doing treating the content of morality as something distinct from what, generically, we do or do not have reason to do), there is the related error of over-generalization. Even if it were correct that the content of morality is not individuated by our reasons, a good purely-intuitive approach should yield a “mixed” view with something like the broadly deontological and consequentialist elements described above. Perhaps the content-of-morality error, compounded with the drive to come up with a unified explanation, is what leads to this kind of over-generalization.
(f) It is unclear to me (and this is just my ignorance) how virtue theory helps with any of this (or fits in with it). Is the idea that it is less prone to the error of making specific, inflexible, intuitively-justified assumptions about what the content of morality must be (or that it must be, in some sense, radically distinct from other reasons for action, as on the duality-of-practical-reason picture)?
I think I understand and agree with most of that up to (d), where I’m not quite sure how to understand your claim. In one sense, I don’t see how your kind of approach could conceivably vindicate Kantian respect, Rossian obligations, or consequentialism. All three views require that we recognize basic reasons that are not agent-relative, and that is what (I thought) you agree with me in rejecting. Of course, you might get something that looks a lot like Kantian respect in its content, but I take that to be a rather different thing; after all, even Rand endorses something like Kantian respect insofar as she maintains that every human being is an end in himself, that the rationale of rights is to protect and preserve that status, and that we should respect one another’s rights; but Randians and Kantians both understand that Rand is not a Kantian. A similar point holds for Rossian obligations and consequentialist considerations. You may be able to support the same sorts of action, but I don’t see that you can support the same conception of why those actions are reasonable. I’m also not convinced that an agent-relative basis would generate a moral theory with the same content as an agent-neutral one — though I set no limits on the ingenuity of consequentialists to produce prima facie plausible justifications of anything.
In any case, if you mean to say that your approach can support reasons to do broadly the same kinds of things, then I think that’s right, or at least plausible enough. But if you mean to say that it can endorse the same kinds of reasons or the same conception of why we have reasons to do those things, then I don’t see how that’s supposed to work.
To my mind, one of the strengths of agent-relative approaches is that they are inconsistent with consequentialism. That’s not the only or main reason to endorse an agent-relative conception of reasons, but it’s a bonus, and not just because consequentialism tends to offer wildly counter-intuitive recommendations, but because it seems beset with all kinds of awkward difficulties that can be ironed out within the constraints of the theory but not without the appearance of grasping at anything that will save the theory. I think of consequentialism on the model of a Kuhnian scientific paradigm like geocentrism; its adherents were able to make the theory empirically consistent, but only at the cost of ad hoc hypotheses with no warrant beyond their apparent ability to address particular problems with the theory. If there were no other suitable alternative, then it might be reasonable to stick with consequentialism and work on patching it up. But there are reasonable alternatives.
For what it’s worth, I don’t think “virtue theory” has anything to say about these issues, and I’m not happy with the “virtue ethics” label at all. Aristotelians and other virtue ethicists of course have plenty of insightful things to say about virtues. But I think what is distinctive and important about the Aristotelian tradition is its eudaimonism, and eudaimonism does have plenty to say about these issues, because, understood in one way, it is a view about precisely these issues: it combines an agent-relative welfarist theory of practical rationality with an objective perfectionist theory of the good that gives an important role to non-instrumentally benevolent regard for others. Virtues are important, but particular accounts of the virtues need to be justified and explained, and that will require appeal to their role in well-being.
Thanks for a stimulating and interesting post, David. (It also looks like it was a good deal of work to produce.) I am in sympathy with your project. I have noticed that contemporary philosophers have a strange way of talking about the claims of “morality,” as though we could somehow determine the content of those claims and then ask, as a completely independent question, why we should care about them. Surely that signals some sort of problem.
But I wonder about the notion that moral realism implies that morals must be rationally motivated. By this appeal to rationality, you seem to mean practical reason (this is explicit in your reply to Michael), as though morals should somehow be derivable from practical reason. If this is the idea, then I have a couple of comments.
First, just as a logical point, the claim seems to be false. It is true, of course, that if some course of action A is recommended by morality, then, assuming moral realism, you have reason to do A. Morality just is a set of things you ought to do, and if moral realism is true, then you really ought to do what morality dictates. So you have a reason to do A, and the reason is that morality dictates it. Compare: If it would be good for you to do A, then you have reason to do A; namely, because it would be good for you.
But none of this requires that what morality dictates be determined by practical reason. Morality might dictate A just because it does. It might just be a basic fact of reality that morality dictates A. If so, this is not derivable from practical reason. Nevertheless, moral realism would still be true.
As I say, this is meant to be only a simple point of logic. Moral realism does not require that morality be rational.
It seems to me that there are some current views of morality that are realist but that hold that moral values are just basic rather than derivable from any reasons. Mark Johnston, in “The Authority of Affect,” claims that there is a primitive value structure in nature which we are in cognitive contact with through sense-perception. (!) That is, you can just see perceptually that certain things are beautiful or ugly, noble or base, inspiring or disgusting, and these aren’t subjective impressions but cognitions of properties that are really there. Again, I think Jonathan Dancy has a theory that there is a structure of practical reasons (he doesn’t talk about morals specifically) that really pervades reality and that is basic (though I could be wrong, and I’m afraid I can’t say much more about it). His book is Practical Reality. Finally, there’s intuitionism, à la Moore and I guess Huemer. Here the idea is that we have direct intuition of the moral order, which is real but not discerned through the deliberations of practical reason.
Second, aren’t even certain elements of an Aristotelian approach to determining the human good independent of practical reason? I am thinking of a sort of ergon argument that would say something like, “The characteristic human functions are the set X, so the human good consists in performing X well.” And what is good, of course, we have prima facie reason to pursue. But the reasons given why X is good are not practical reasons, they’re epistemic. The appeal is to empirical facts about human functions. Similarly, we can argue, “The characteristic bodily functions are the set X, so the health of the body consists in its performing X well.” Now we have said something about the bodily good, but not by appeal to practical reason.
(i) I think David P’s first, logical point is correct as far as it goes. Since moral reasons are objective normative reasons, they only have anything essential to do with rationality insofar as objective normative reasons have something essential to do with rationality. I think that they do: facts are objective normative reasons only because, if you were to grasp the fact, it would be an input into a process of good rationalizing response (terminating, in the case of practical reason, with an immediate intention or intention to perform an action that one can perform directly). Maybe this is wrong. Maybe value is entirely independent of rationality, and valuable states have objective normative reasons (to promote them) associated with them regardless of whether we have any practical rational capacities at all (and regardless of whether there are any good rules of practical reason to follow).
But even if this were true about value (substitute ‘what morality dictates’ or ‘moral obligation’ if you like), there would still be the question of whether we would be acting rationally in, say, grasping that the valuable thing is valuable and taking its promotion or frustration into account (relative to one’s options and relative to how much reason one has to take competing options) in deciding what to do. So, at least given that we have the right kinds of practically rational capacities, it does seem that there are characteristic rational responses associated with, say, there being things that morality “dictates” that we do. We might, then, in a theory-neutral way, ask about how these associated rationalizing reasons (once one has grasped the “dictates” of morality) fit in with the other rationalizing reasons (and patterns of good practical rationalization) that we have.
(ii) This might more directly address that other David’s response to your point here, but whether good patterns of practical rationalization are purely instrumental (and, if they are, how this is specified – decision theory or something else?) is a distinct question – though, as I have described it above, the good rationalizing pattern associated with grasping what morality “dictates” is instrumental in something like the decision-theory way (and this way of describing things seems perfectly natural).
Non-instrumental desires might be appropriately evaluated (say, in raising children, evaluating how good a person you are, training yourself to be a better person) in just the same broadly instrumental decision-theory-type way as an action can be. Though we don’t explicitly reason about what to non-instrumentally desire or value (we do not seem to be equipped with good rationalizing patterns that take prior non-instrumental desires and relevant instrumental information as inputs and spit out further non-instrumental desires – or even actions that directly give rise to further non-instrumental desires – as “conclusions”) this does not mean that the apt evaluation of non-instrumental desires (say, of a non-basic sort) is not in terms of prior (say, basic) non-instrumental desires and the conditions created by the rationalizing responses to them that we exhibit.
I’m still not buying the logical claim. You write “Maybe value is entirely independent of rationality and [we] have objective normative reasons (to promote the valuable states) associated with them regardless of whether we have any practical rational capacities at all (and regardless of whether there are any good rules of practical reason to follow).”
I don’t see how this makes sense except on some sort of mystical reification of “objective normative reasons.” What is a reason? It is a consideration that counts in favor of doing something (acting, deciding, inferring, concluding, whatever). That’s it. How could it make any logical sense to say that there might be considerations that count in favor of our promoting “valuable states” even if we do not have the capacity to consider what counts in favor of doing anything or even if there are no considerations that genuinely tell in favor of doing anything? If that’s not what having practical rational capacities comes down to, or what there being good rules of practical reason to follow comes down to (I’m not sure we should build the notion of rules in as an essential component, but let’s grant it for now), or what objective normative reasons come down to — then what more is there to objective normative reasons and the capacity for practical rationality?
Perhaps you’re struck by the thought that it is logically coherent to maintain that, say, some things are just objectively and impersonally good states of the world regardless of whether anyone has reason to promote them? I’m not sure that’s really coherent, but even if it is, it doesn’t get us to morality in anything like the way that term is normally used. Objectively and impersonally good states might be good, but if we have no reason to promote them then they don’t generate any moral requirements at all. Maybe some people want to call themselves “moral realists” simply because they believe in impersonal and objective goodness, but I don’t think that’s sufficient. “Morality” is a broad term, but it simply loses its meaning if it loses its connection to what we ought to do.
Maybe my view will be clearer if I add that I do not think the content of morality has to be specified by appealing to practical rationality in a way that reduces morality to practical rationality. Quite the contrary; I think practical rationality itself can’t be properly understood unless we bring “moral” considerations in. My claim is rather that (i) moral judgments are true by virtue of their connection to what we have reason to do; (ii) if (i), then we cannot know whether a given moral judgment or theory is true unless we know whether we have reason to do what it commends; (iii) therefore the content of morality cannot be specified independently of what we have reason to do.
I hope that helps some? Or perhaps I’m misunderstanding something about the cases you (and DP) are thinking of as illustrations of the logical point that I’m resisting.
For what it’s worth, I find your claim that “we don’t explicitly reason about what to non-instrumentally desire or value” plainly false. If that were true, much moral philosophy simply wouldn’t exist. Nor is this just something that philosophers do; people with the luxury to think about what to do with their lives do it all the time. Have a look at David Schmidtz on what he calls “maieutic ends” in his paper “Choosing Ends.” He shows pretty convincingly, to my mind, that one doesn’t have to embrace a more substantive conception of practical reason such as I’m inclined to accept in order to see that we can and do reason about our ends and that we can do so more and less rationally.
David R. – The idea behind the logical claim would be that we can separate (a) the objective, instrumental value of an action from the agent (b) having any rationalizing reason to take the action and even (c) having any idealized rationalizing reason to take the action (if he knows the relevant instrumental facts). However, in order to see how to get (a) and (c) to separate, you need to imagine folks without whatever the relevant instrumental-rationality capacities are. Maybe that’s a bit rare. Unless this is the case, at least with regard to practical reasons, the instrumental-value property always comes along with the ideal-rationalizing-reason property. I address this distinction a bit in my response to Irfan.
The distinction helps, and things get interesting, when there is a valuable (for one) state of oneself that one cannot “get to” directly via reasoning (inference, rational response). In this case, there is no idealized rationalizing reason to be in that state. This is what is behind my idea that we cannot literally reason our way to non-instrumental desires. All we can do – I think – is reason our way to being in states that tend to make us have good non-instrumental desires. We can use this distinction to solve puzzles like “the toxin puzzle.” If someone says they’ll give me a million dollars if I intend to drink a toxin that will make me pretty darn sick (even if I end up not following through on my intention), do I have reason (sufficient reason, conclusive reason) to intend to drink the toxin? We might say that my having this intention would be valuable even though it is irrational (and in an ideal sense, since I am fully informed of relevant facts) for me to intend to drink the toxin. This irrationality reflects the sorts of rational rules I have that terminate in intentions. These rules are sensitive to the desirability of the action, not that of the intending state. Perhaps if we had evolved to face these sorts of situations, we could rationalize our way to such valuable intentions. But we can’t. If I want to get the million bucks, I have to irrationally intend to drink the toxin. Or, if the motivations associated with the normal way of rationalizing intentions make it impossible for me to thus intend, I need to somehow manipulate (or have someone else manipulate) the situation, perhaps by corrupting the information that I have, such that I intend to drink the toxin.
This is a crazy puzzle – and it certainly brings up other issues and there is no agreement about which are most important – but I think the important one is that of what types of rationalizing rules or procedures or processes we have the capacity to instantiate. It seems right that our instrumental reasoning capacities do not allow things like having (believed-to-be) valuable intentions in contravention of the desirability of the action or valuable beliefs in contravention of whether the belief is at all supported by the evidence one has. In general, knowledge of the instrumental value of a non-action response does not allow for the direct rationalization of such a non-action response.
I’ve considered the idea that we have rationalizing capacities that take us from non-instrumental desires to further non-instrumental desires. It would be really cool if we did, but I don’t think we do. What we do, I think, is deliberate about the desirability of caring (or caring more than we do) about things in non-instrumental ways – about, perhaps, being a slightly different person than we are presently. This is theoretical reasoning (though the content is evaluative). The relevant conclusions are that doing things that make it more likely that our non-instrumental desires will change for the better is advisable. If we then do those things, and if we are correct in our judgement about what causes what (and in the change in desires being good or beneficial), then it is likely that our profile of non-instrumental desires changes for the better. We have indeed used our rational capacities (and in particular our capacities for explicit deliberation) to get better desires. It is just that the better desires are not themselves conclusions of any process of reasoning (rational response, etc.).
I’m afraid I don’t follow much of that. Here are two questions that might help. First, what do you have in mind by ‘rationalizing reasons’? I understand the distinction between normative and explanatory/motivating reasons; are rationalizing reasons normative reasons, explanatory/motivating reasons, or something else? Second, what is the toxin puzzle supposed to be? Why exactly would it be irrational to (intend to) drink the toxin? One might think that if it’s really a good thing to get a million bucks despite being sick for a while, then it isn’t irrational to drink the toxin. One might also think that it is irrational to drink the toxin because even though getting a million bucks would have some value, its value is not sufficient to give us good reason to pursue it at the expense of being severely ill. But that sort of judgment wouldn’t pose any problems for the view I’ve been taking, since it simply amounts to saying that we do not have sufficient reason to pursue some genuinely good things in certain circumstances, viz. those in which pursuing those things would do more harm than good or otherwise violate some important norm — but that’s just an everyday fact, and doesn’t show that there are genuinely valuable things that we could rarely or never have any good reason to pursue. But even if it did show that, it wouldn’t quite hit on the issues I’ve been concerned with, which have to do with moral judgments of the form “you ought (not) to / should (not) X,” and not judgments that such-and-such a thing is valuable. As I think I mentioned in an earlier reply, I see no logical difficulty in maintaining that there are objectively good states of affairs that have no bearing on what we have reason to do; it’s just that judgments of the form ‘X is an objectively good state of affairs’ aren’t moral judgments at all, at least not of the sort that I was concerned with in the OP.
I suspect that I’m missing something important in your discussion of the toxin problem, though.
Apologies, David. My feeble “method” for not letting bloggy stuff take up too much time is making (numbered or lettered) points as they occur to me. Especially when not enough background or context is shared, the necessary brevity (and occasional sloppiness) can make for less than a real meeting of minds!
(1) Rationalizing reasons are sometimes called subjective or belief-relative reasons: they are the contents of beliefs that one has, on the basis of which one reaches a conclusion with at least some degree of rationality (determined by accordance with a good rule or pattern). They are opposed to objective normative reasons. For example, if in fact setting the toaster on a lower setting will prevent my toast from getting burned, and if this would be a bad thing for me, I have objective normative reason (the fact that the lower setting will prevent the burning) to put the toaster at a lower setting. I also have rationalizing reason (the proposition or content that the lower setting will prevent the burning) if I have the belief that the lower setting will prevent the burning. I would have the same rationalizing reason (to turn the setting on the toaster lower) if I believed this falsely. At least paradigmatically, morality is concerned with objective normative reasons (as you point out, it is inherently normative in a way that etiquette is not). We can make the same distinction in reasons for belief (though talk of reasons for belief almost always concerns rationalizing reasons; the prevalence of the different sorts of reasons-talk in morality vs. practical reason vs. belief-formation is something that a good theory of value and rationality or reasons and rationality would explain).
(2) What you are told (and reasonably believe) is of value in the toxin puzzle (due to Gregory Kavka) is exhibiting the psychological state of *intending* to drink the toxin. Whether you actually drink it is immaterial to getting the reward. Some people think it is psychologically impossible to have this intention if you know that actually following through is immaterial for getting the reward. Maybe this is true, maybe it is not, but I think the important thing is that the reason why it is hard or impossible to form such an intention is because you get to have intentions via following relevant sorts of rationalizing rules, and we get intentions as rationalized outputs only relative to the beliefs about the value of the actions intended (or desires to perform the actions), not relative to the same sort of things concerning the intentions themselves. This is the important lesson to take because it addresses important issues in the nature (and limits) of practical rationality. This case (and others like it) is not just a head-scratcher about what mental states do or do not count as intentions, whether it is psychologically possible in such situations to intend to PHI on the basis of the value to the agent of having the intention, etc. But yeah, it would be rational to *drink* the toxin (and hence *intend* to drink the toxin) if you were getting rewarded for that.
(3) If everyone morally ought to or is morally obligated to PHI, this would seem to be a fact-relative not belief-relative sort of normative thing (a function of objective normative reasons, not rationalizing or subjective normative reasons). If – and I think you are right, here – moral normative concepts are genuinely normative, then if A is morally obligated to PHI it must be that A has conclusive objective normative reason (perhaps of a certain “moral” sort or flavor) to PHI. (This, not the way I interpreted objective normative reasons in terms of value, is what is essential here.) And if this is so, there is broad agreement that A (necessarily, in some sense of ‘necessarily’) would have conclusive rationalizing or subjective normative reason to PHI *if relevant perhaps-counterfactual conditions in him/her were met*. (In standard practical cases like my turning-down-the-toaster case, this additional condition is A coming to know relevant instrumental facts. It is less clear that it is such knowledge, or only such knowledge, that you need to add to the objective normativity to get to the rationalizing normativity in the case of moral obligations.) My spin on David P’s logical point is that the ‘necessarily’ in the above conditional statement applies only relative to the relevant practically rationalizing capacities of the sort of agent one is generalizing over. Take away the relevant capacities for practically rationalizing – rationally justifying – intentions (and thereby actions) and you falsify the conditional. This comes to the ‘necessarily’ not being logical or conceptual and the objective normative reasons of moral obligation being logically distinct from the corresponding rationalizing reasons. Maybe we’d have to be insects or something in order for these two things *actually* to come apart in the standard cases involving basic instrumental rationalizing capacities.
(4) However, even if this version of something like David P.’s logical point holds, it should not make much of a difference to your project. There is no reason to think that, for creatures like us, there is no “rational path” from the objective normative reasons of morality to the corresponding rationalizing reasons of practical rationality (say, by idealizing one’s present landscape of rationalizing reasons to the ones that one would have if one had the right desires, right knowledge or both). The question is how. Typical conceptions or theories of morality and practical rationality do not seem to square well enough to explain the connection in a satisfactory way. Your description of how we should (and should not) be thinking about this problem seems to me spot-on. Folks are too simplistic and too confident in their conceptions of morality and not sufficiently cognizant that the full vindication of any conception of morality will have to be based on a general theory of practical reasons and rationality. We need (at least) a theory of practical reason (and, I think, a theory of value) that tells us how to evaluate desires or ends as well as actions.
In response to (the other) David, I think the answer to the question ‘aren’t certain elements of an Aristotelian approach to determining the human good independent of practical reason?’ is ‘no.’ Of course, if we begin by adopting certain conceptions of practical rationality, it may look that way; if we suppose that practical rationality is, for instance, just instrumental rationality, reasoning about the means to whatever ends we happen to have, then the ergon argument will look like an appeal to something independent of practical reason. But I don’t think that’s how Aristotle thinks of practical reason, or how we should think of it. A stock objection to Aristotelian perfectionism is that we could identify precisely what it is to be a ‘good specimen’ of humanity and yet, for all that, show nothing about what anyone has reason to do. But whether that objection succeeds depends on how we understand what it is to be a ‘good specimen’ and how we understand practical rationality. As I understand the Aristotelian approach, considerations about good human functioning identify not simply the minimal threshold that one has to meet in order to count as not lacking some standard human capacity, but what it is to exercise those capacities well, where ‘well’ is assessed by standards internal to those capacities. Similarly, considerations of practical reason are considerations not simply about what would satisfy the desires I happen to have, but also about what desires it is good for me to have. ‘Good’ here need not be understood in a circular or question-begging way as ‘whatever leads me to exercise my essential capacities well,’ but in terms of basically formal criteria of finality and self-sufficiency. One of the strengths of Badhwar’s book, to my mind, is that she argues convincingly that subjective accounts of well-being cannot meet those criteria, at least not as well as objective accounts do. 
I can’t argue for it here, but in a nutshell what I’d say is that from an Aristotelian point of view the aspiration for objective goodness is built in to practical reason even when the latter is understood primarily in formal terms. If Aristotelians are right, when we discover what good human functioning is, we discover facts about ourselves, and these facts cannot be irrelevant to what we regard as worth pursuing. I’d happily concede, however, that not all self-proclaimed Aristotelians have given accounts of good human functioning or practical reason on which the connection between the two is preserved; there really are some conceptions of good human functioning, not obviously incoherent, on which it would remain an open question why I have any reason at all to value good human functioning. I don’t think Aristotle’s — or most of the most influential Aristotelians’, for that matter — is one of those, but defending that claim would take me more time and space than I can afford right now.
I’d say a similar thing about other conceptions of moral realism. So far as I can see, either a given theory posits some alleged moral facts that necessarily give us reason to act in some way or another — not necessarily decisive or over-riding reason, as I tried to emphasize in the original post, but some reason, some consideration that genuinely tells in favor of acting one way or another — or the facts posited are not really moral. You say that it might be a basic fact of reality that morality dictates A. But what would it be to “dictate” something if not to give a reason? Nothing, so far as I can see. You say that whether morality dictates A might not be “derivable from practical reason”; again, if we’re beginning with a conception of practical reason as instrumental or the like, then I’d agree, but I don’t think anything important follows here. If practical reason is in fact purely instrumental, then the supposed moral facts that allegedly dictate something as a basic fact of reality do not in fact dictate anything at all; if morality dictates something as a basic fact of reality, then practical reason cannot be purely instrumental, but must extend to recognize the reasons that have their source in whatever it is about morality that allows us to say that it dictates things.
Moore is perhaps a good example to illustrate what I mean. Moore thinks there are just sui generis moral facts to the effect that such-and-such is good, and not good for anyone, but just good simpliciter. But he also thinks that reason can and does recognize these facts — though admittedly only, at the most fundamental level, through a kind of intuition — and that a practically reasonable person takes these facts as considerations that guide his actions. I don’t know Moore well enough to know whether he posits a kind of dualism of practical reason, but what I recall of Moore would simply make no sense if he did not take it that my identification of some state of affairs as objectively and impersonally good gives me a reason to favor it, and that failure to recognize that consideration is a failure of practical reason.
I would class Moore and others like him in the group of folks who recognize agent-neutral reasons. My objections to agent-neutral reasons, as formulated so far, are pretty weak and sketchy, but I take it that (a) views that endorse them satisfy the connection between morality and rationality that I take to be required for any sort of moral realism, and (b) that such views will encounter the difficulties I pose, however inadequately.
Very briefly, in response to Irfan: I think we / quasi-Schroederians can get more mileage out of a non-moralized conception of a normal human psychology than you seem to suppose. But I think the role of habituation in people’s psychology shouldn’t be overlooked, and that some simple empirical considerations tell against the notion that a non-moralized conception of psychological normality can get us the conclusions that quasi-Schroederians want. Take the Spartans, for instance. The Spartans certainly had a moral code, a famously severe and strict one. But they also brutally enslaved and subjugated whole populations of people in ways that hardly anybody today would regard as just or “moral.” Were they psychologically normal? I don’t know (see Jean-Pierre Vernant, ‘Between Shame and Glory: the Identity of the Young Spartan Warrior’, Mortals and Immortals, Princeton 1991). I suppose a good case could be made for their not being so, but I’m not at all confident that it could be successful without smuggling in moralized premises. Nonetheless, I think we could in principle probably arrive at a conception of normal psychology by statistical means and then show that psychologically normal people have instrumental reasons to embrace some sort of morality. But I doubt we could arrive at a highly determinate account of the morality that all such people have instrumental reasons to accept; I’m virtually certain that we would not end up with an account of morality along Korsgaardian or Singerian lines. I’m not especially sympathetic to either Korsgaard or Singer, but I don’t think we can dismiss their moral theories by appeal to some antecedently established Humean theory of reasons. The theory of rationality needs to take “moral” considerations into account just as much as the theory of morality needs to take into account considerations of what is practically reasonable (yeah, yeah, I know, I’m a dirty coherentist…)
I hope that helps to clarify things. David said that it looked like my post took a good deal of work to produce, but really I’m just vomiting forth various thoughts that I’ve had over and over again but never had a chance to organize, and most of them can be understood as reactions to David Brink inspired by Philippa Foot. If I were to put a really great deal of work into it, maybe I’d get somewhere they’ve never been. For now, though, I’ll be satisfied if I can get straight on these matters.
I could be wrong, but this seems to commit the very fallacy I worried about in the third and fourth paragraphs of my comment, namely conflating reasons for the good with reasons that follow from the good. Yes, of course, it is a failure of practical reason not to favor the good once identified. But the question is whether it is necessarily a matter for practical reason to identify the good. And you seem to admit that it isn’t. For Moore, the good is identified by intuition. You hedge this a bit, saying intuition comes into play “at the most fundamental level,” with reason operating elsewhere. But if this hedge seems important (it doesn’t to me), we can easily imagine a Moore* who regards the good as entirely known by a special intuition. In this case we have identified the basis of morality, on a realist conception, without appeal to reason.
This may also be committing the above fallacy.
I mean it might be a basic fact that it is morally obligatory to A. I don’t think there’s anything fundamentally more to say about “what it [sc., moral obligation] would be.” We either understand the general notion of an obligation, a norm, an ought, a should, or whatever we call it, or we don’t. If we don’t, I don’t see what “reasons” we would be able to recognize that would enable us to understand it.
It seems to me that not only is it easy to see how moral realism could be true without morals being based on practical reason, but this is the most natural view. It seems to me that if we want to be moral realists and we are fortunate, it will turn out that there is an objectively discoverable human good, which is a matter of fact. We would then expect to discover this human good by one or more of the means by which we discover other objective facts—basically by reasoning from sense-perception. It could go the way I indicated in my comment for the Aristotelian view. But this would be a matter of theoretical reason—scientific reason—not practical reason. And this seems to me to be Aristotle’s own view. He constantly insists that we deliberate about means, never ends. I take it he means that discernment of proper ends is a theoretical matter. But of course I’m hardly an Aristotle scholar.
You can redefine “practical reason,” if you wish, to mean (or include) theoretical reason about ends. But this doesn’t accomplish anything material. And it still wouldn’t work, because there’s the possibility (however remote) that we discern the good directly and without reason, by sense-perception à la Johnston or by special intuition à la Moore.
It might help if I understood better what you conceive the role of reasons to be in generating morals. Do you have something special in mind? Put it this way: Is there some special brand of reason you think is needed to establish morals, or is it only important that it be reason? If the former, what is the special brand? Or again, why do moral facts have to be discerned by reasons when there are other objective facts that don’t have to be discerned by reasons?
Ok, I think I see where our disagreement lies, and where I’m happy to agree with you. What you’re objecting to, I think, is the notion that moral realists must suppose that practical reasoning generates morality or that morality is based on practical reasoning. I’d agree that moral realists need not suppose any such thing; indeed, I’m not sure anyone who embraces such a view is a moral realist after all. That depends on what kind of “generating” and “basing” we’re talking about. Much of your comment seems epistemological in emphasis; do we find out about the good or morality or what not via practical reasoning or theoretical reasoning? But at some points you also seem to have in mind a view about the ontology of moral facts on which the moral facts themselves are somehow products of practical reasoning. I think that latter notion might be inconsistent with realism; certainly many people who endorse it take themselves to be at odds with realism. But I’m not sure that matters too much for my purposes, because I’m not really concerned here with whether moral facts are based on practical reasoning in either the ontological or the epistemological sense.
Let’s take Moore as you understand him as an example. Moore thinks there are sui generis, objective, impersonal goods in the world. They are certainly not ontologically dependent on reason, except insofar as some things that are objectively and impersonally good require the existence of rational beings as a condition of their existence. So the moral facts are ontologically prior to reason. At the most basic level, we don’t know these via practical reason, or maybe even by reason at all, but by a special faculty of intuition (my hedge comes from the fact that I’m not sure whether Moore thinks that this intuition is a kind of exercise of the rational faculty, since he seems to suppose that only rational beings can have these intuitions; Plato, on an intuitionist reading, thinks something like this: the intuition of the Form of the Good is a rational intuition and hence an exercise of our rational faculty, though it is not in itself an exercise of or conclusion from any discursive process of reasoning — but it doesn’t matter which way we take it). So our knowledge of the moral facts is prior to, or at least doesn’t depend on, practical reasoning, or perhaps any reasoning at all. None of this poses any logical problem for my claim that realists cannot specify the content of morality independently of the content of what we have reason to do. If these Moorean facts about the impersonal, objective goodness of states of the world are to have any moral significance, then they must give us reasons; if they don’t, then we can call them good all we want but they won’t have any relation to what we ought to do, how we ought to treat each other, etc. But if they do necessarily give us reasons, then in specifying the content of morality Moore is also specifying, in part at least, what we have reason to do. “Not independently of X” doesn’t mean “based on or generated by X.”
As for your distinction between practical and theoretical reason, I don’t think even Aristotle, whose distinction between them is not beyond challenge, distinguishes them in the way you do. Nor do I think he would be right to. Practical reasoning is thinking about what to do. So if identifying objective goods helps us to answer questions about what we should do, then thinking about them counts as practical reasoning. This isn’t to collapse the distinction; it may be that discovering the basic features of human well-being is strictly a matter for empirical theorizing that itself has no practical dimension, but drawing on that empirical theorizing in order to identify what to do is practical reasoning. You’re right that Aristotle repeatedly says that we don’t deliberate about ends as such, but about means to ends (though his conception of a ‘means’ is not limited to instrumental means), but that is a claim about deliberation, and deliberation is not the whole of practical reasoning. I take the Nicomachean Ethics itself to be an exercise in practical reasoning as Aristotle conceives it. That’s because he says that ethics (well, strictly, ‘politics’) is a practical and not a theoretical science. But his sense of ‘theoretical’ is rather narrower than ours, so practical science includes a whole lot of what we colloquially describe as ‘theory.’ What makes it practical knowledge is that it has human action as its subject-matter*; inquiry is reasoning; inquiry aimed at practical knowledge is practical reasoning.
But my claims haven’t been meant to give practical reasoning — on any conception of practical reasoning — any kind of epistemic or ontological priority over moral facts. I think that’s the source of our apparent disagreement, no?
* [Addendum]: I should clarify that rather clumsy way of putting it; what makes it practical knowledge according to Aristotle is not simply that it has human action as its subject-matter, but also that it has action, rather than knowledge alone, as its goal. The point of the NE is to help us think about what to do.
Yes, it seems our disagreement was all due to misunderstanding. The main point you wanted to make—that if morality is based on real facts about what one ought to do, then it is incoherent to say that morality requires A but one has no reason to A (or even to wonder what reason one might have to A)—is what I endorsed in the very first paragraph of my comment! I was only objecting to the notion that the morality in question necessarily had to be discovered (or even derived) by practical reason, which you also seemed to me to be saying. (Thus, for example, the statement I quoted earlier: “what would it be to ‘dictate’ something if not to give a reason? Nothing…”) Ah, clarity! Best achieved right from the start, but better late than never.
Yes, I had been wondering how your claim that there was a logical gap could fit with what you said at the beginning of the comment. Makes sense now. I can’t say I see how “what would it mean to ‘dictate’ something if not to give a reason?” suggests that practical reasoning is what discovers or generates whatever it is that gives the reason, but perhaps it’s the ambiguity of ‘give a reason.’ Maybe “to be a source of reasons” would have been better?
Well, I knew I’d run into this kind of problem, so thanks for showing me where I can be clearer.
Yes, that would be better. To speak of giving a reason, when the truth is that it’s just a basic fact that something is required, is quite misleading. Suppose I say, “What can it mean for ‘Thou shalt not lie’ to be a basic moral fact if not to give a reason for not lying?” It would be natural to respond, “If it’s just a basic moral fact, then it doesn’t have a reason!” The reason for this is that “reasons” usually appeal to something further, something that rationalizes the claim in question.
Consider this paragraph:
All the talk of reasons here is misleading in the way I’m talking about. It’s odd and misleading to speak of claims about “what we have reason to do” if all you might mean is claims about the good. And the statements at the end about the “rationality of morality” only contribute to the suspicion that you have an extreme rationalistic conception of morality.
I’m just trying to be helpful here. In hindsight, I can see how you intend your statements to be interpreted. And there can be reasons for the locutions you employ, such as to achieve a maximum of generality (i.e., so as not to seem to make your case depend on a teleological conception of morals or any other). But if you decide this “reasons” talk is the best way to go, then some words of explanation would be apropos, I think.
Fair enough. I think the analogy with ‘give a reason’ as I intended it would be “What can it mean for ‘Thou shalt not lie’ to be a basic moral fact if we do not have good reasons not to lie?” A necessary condition for the truth of ‘you ought not to lie’ is that you have a reason not to lie. Logically, there is nothing incoherent about claiming that there is no further fact that grounds that reason or explains why you have it — you just have it, because that’s what the nature of moral reality is. I can appreciate more clearly now that it would then be odd to say that this brute moral fact gives you a reason, since it allegedly is itself a reason. My basic claim about the necessary connection between morality and reasons still stands, but that’s an odd way to put it if we’re thinking about views like that. I wasn’t thinking about views like that, in part because they seem so silly; a view like Moore’s is a little different, since ‘beauty is intrinsically good’ is not identical to ‘you ought to promote beauty,’ but is allegedly the source of the reason we have to promote beauty; a quasi-Kantian analogue would be ‘people have inherent dignity as ends in themselves’ and ‘you ought not to kill innocent people,’ where the first is supposed to be the source of the reason expressed in the second. But there’s logical space for the notion that ‘you ought not to lie’ has no further explanation whatsoever. It’s logical space inhabited primarily by silly people, but it’s logical space.
So that helps even more than I initially realized.
The ‘reasons’ talk is driven by a few considerations: first, that it’s how a lot of philosophers are formulating their views these days, prominently including some of the philosophers I’m responding to; second, because it seems to hold out some promise for giving an illuminating account of the differences between theories that otherwise look like they can only talk past one another. But of course ‘reason’ and ‘rationality’ suffer from all kinds of ambiguity, so even if both of my reasons (!) for going in for ‘reasons’ talk are good, I still need to do more work to avoid unnecessary misunderstanding and confusion.
Addendum: To be clear myself, the thing about a phrase like, “rationality of morality” is that it’s also ambiguous, between something like personal relevance of morality (your meaning) and logical inferential structure (or even derivability from pure reason) of morality (not your meaning).