Having reread David Estlund’s “Human Nature and the Limits (If Any) of Political Philosophy” (2011) – I had read it, or similar material from Estlund, years ago – I have some thoughts. Here is one. (For a variety of more shooting-from-the-hip points, of varying quality and level of present endorsement, see David Potts’ more comprehensive critique of Estlund’s article and specifically my comments there (Estlund’s Defense of Ideal Political Theory).)
*****
If agent X being unable to perform action (or carry out plan) A entails that it is not the case that X ought to perform A, why is this? This answer seems plausible: because the inability rules out A-ing as an option for X in her deliberation and decision-making. If this is right, then why suppose that only the ability/inability binary is relevant? Why not a cut-off in a relevant scalar quantity? Specifically: perhaps if it is unlikely enough that X will pull off A (maybe or maybe not due to relevant deficits in X’s internal, psychological abilities), then A-ing is not an option for X — and so it is not the case that X ought to A.
One would probably want to add some things to this proposal. First, suppose that the likelihood cut-off varies depending on how important the decision is. For example, because moral decisions are quite important, an especially high degree of unlikelihood might be required (maybe to the point of the agent lacking normal adult capacities, including having clinical conditions concerning impulse control and the like). Besides being a natural refinement, this condition seems well-positioned to prevent scoundrels from “wiggling out from under” requirement-type moral principles or rules. Second, we need to define the relevant option-set as something distinct from the set of all the things that X is able to do in her decision-context. The suggested criterion for exclusion from the option-set suggests a similar criterion for inclusion: something like a threshold of expected value or utility. We might motivate this kind of picture (and flesh it out some) by appealing to deliberation functioning in two stages: first, often in the background or implicitly, we determine which notional actions are candidate winners; then, second, more typically via explicit deliberation, we compare the candidates in order to choose a winner. For the first stage, one ruling-out criterion is sufficient unlikelihood of X pulling off A. Another, it should be noted, is its being normatively required, including morally required, that one refrain from A-ing (either way, A-ing gets booted from the set of actions or plans subject to serious consideration and deliberation, so that it is not the case that X ought to A).
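Purely to fix ideas, here is a toy sketch of that two-stage picture in code. Everything in it (the way options are represented, the particular cutoffs, the use of expected value at both stages) is my own illustrative simplification, not anything drawn from Estlund or from the decision-theory literature:

```python
# Toy sketch of the two-stage deliberation picture described above.
# All names and numbers are illustrative assumptions, not a worked-out theory.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    value: float        # how good the outcome is if the agent pulls it off
    probability: float  # how likely the agent is to pull it off
    permissible: bool   # whether the option is normatively ruled in

def candidate_winners(options, p_cutoff, ev_floor):
    """Stage 1 (often implicit): rule out options the agent is too unlikely
    to pull off, options that are normatively ruled out (e.g., morally
    forbidden), and options whose expected value falls below some inclusion
    threshold."""
    return [o for o in options
            if o.probability >= p_cutoff
            and o.permissible
            and o.value * o.probability >= ev_floor]

def choose(options, p_cutoff, ev_floor):
    """Stage 2 (more typically explicit): deliberate over the remaining
    candidates and pick a winner, here by expected value."""
    candidates = candidate_winners(options, p_cutoff, ev_floor)
    if not candidates:
        return None
    return max(candidates, key=lambda o: o.value * o.probability)

if __name__ == "__main__":
    options = [
        Option("heroic rescue", value=100.0, probability=0.001, permissible=True),
        Option("call for help", value=60.0, probability=0.95, permissible=True),
        Option("walk away", value=5.0, probability=1.0, permissible=False),
    ]
    # "heroic rescue" is filtered out at stage 1 by the probability cutoff;
    # "walk away" is filtered out as impermissible; "call for help" wins.
    print(choose(options, p_cutoff=0.01, ev_floor=0.0))
```

The only point of the sketch is the shape of the picture: unlikelihood and normative impermissibility both operate at stage one, booting options from the set over which stage-two deliberation ranges. The cutoff could be made stricter for weightier decisions, per the first addition above.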
If this view is plausible enough, we need to take it seriously as a candidate view (of how the relevant ought-implies-can relationship works and perhaps how similar relationships work). We would face a real question of which view to take (one that would ideally be settled by working toward some kind of explanatory philosophical account, not simply by direct appeal to intuitions). But maybe I’ve missed some strong objections and the proposed view isn’t very plausible after all.
What do y’all think?
(The proposed sort of view yields different results from Estlund’s (regarding individual agents being under moral requirements, and regarding society being under a moral requirement to achieve maximal social or political justice or whatnot). On this view, though we cannot determine that a proposed moral requirement is null and void simply by noting the unlikelihood of the relevant agents pulling off the relevant action or plan (as Estlund wants), this kind of condition does do important work toward “moral requirement defeat” (which Estlund seems not to want). From the standpoint of the proposed sort of view, Estlund’s core position here, though technically correct (things like the unlikelihood condition do not go all the way to defeating normative requirements on their own), fails to do what he wants it to do: let in the full range of moral requirements to perform actions that the agent is highly unlikely to pull off (perhaps because it is super-difficult, as a broadly motivational, causal matter, for the agent to pull off the action or plan). If that is right, then there is no clear need to explicate and defend the normative status of requirements that “will not happen” (for individual agents, requirements to start and complete plans; for society itself, requirements to implement and comply with institutions), especially given the more obviously action-guiding nature of practical proposals that take present conditions into account (such proposals might recommend against starting plans or implementing institutions because, as things are now, disaster would ensue). For independent reasons, one would want to fill in some better way of thinking about our normative position with respect to hard-to-implement plans that have a lot to recommend them (perhaps societal-justice-wise, with the agent being society itself) when the first (or second, or third, etc.) steps carry immediate risks (perhaps concerning the production of more or less just society-level social arrangements). One would probably need to look at cases at least a little bit like the Professor Procrastinate case, though perhaps without the puzzles about the nature of putatively different sorts of moral requirements and how they relate to each other.)
What I think is that the prospects are not good for extending “cannot implies no ought” to “can hardly implies no ought.” The former derives from a seemingly clear-cut and incontestable principle: that it makes no sense to morally require what is impossible. (Though this makes me think of Bernard Williams-type examples where he claims that this does happen and is the basis of some moral dilemmas. For example, Orestes is both morally required to avenge the death of his father and also morally forbidden from committing matricide.)
But once you relax “cannot” to “can hardly,” the basis for blocking a moral requirement is no longer so clear-cut. Suddenly, all sorts of details matter. What degree of difficulty is sufficient to scotch a moral requirement, and how do we begin to quantify it? More importantly, how do we establish it? Think of the moral demands made by socialism, for example: it is strongly contested how difficult they are to meet, with some people arguing that in the proper circumstances they wouldn’t be difficult to meet at all. Again, is the difficulty of meeting a moral requirement due to the requirement being unreasonable or unnatural in some way? (And how do we determine this?) Or is it due to some moral failing in the agent? Again, do benefits arise from attempting to meet a difficult moral requirement even if the attempt fails? If so, why should its difficulty be thought to block it? All these questions depend on the rationale for the given moral requirement, which depends on the moral theory from which the supposed requirement derives, which in turn depends on the conception of human nature and the role and status of moral requirements that underlie the moral theory. So, I don’t see much hope for a general rule against difficult moral requirements.
I’ve figured out that the proposal I’m considering here is not really that plausible. Grant that the unlikelihood of X pulling off some option A can diminish, down to some infinitesimal amount, the magnitude of the valence (or expected utility) attached to that option. But X still ought to A if the value of A-ing is positive but the value of all the other options is negative. So much for any general ought-blocking powers.
What about normative-requirement-blocking powers? Suppose that non-A-ing options (types) are normatively ruled out for X as a general matter (presumably due, at least in large part, to high negative valence being attached; this is one way of thinking about X being under a general normative requirement to A). So, in some particular decision situation, the non-A-ing options (tokens) are normatively ruled out, leaving (the token) A-ing as the only option left standing (whether the value of the token A-ing is positive or negative). Now suppose it is super-unlikely that X will pull off (the token) A-ing. What would be necessary, in order to plausibly get (the token) A-ing normatively ruled out (and maybe some non-A-ing token options ruled back in), is for the unlikelihood of X pulling off (the token) A-ing to give (the token) A-ing a very high negative expected utility. But this is not the sort of normative work that unlikelihood does — it takes the value, positive or negative, of an option and diminishes it (in the expected-utility relationship).
And that last point is really the essential one. Before we can even worry about whether normative ruling-out does the same work that causal ruling-out does, the proposal under consideration needs to show that the unlikelihood does the normative-ruling-out work. But it does not. I was confused in thinking that it could.
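To put toy numbers on that essential point (the numbers, and the simple expected-utility form EU = p · v, are my own illustration, not anything from the discussion above):

```latex
% Unlikelihood scales an option's value; it does not supply or flip its sign.
\[
\begin{aligned}
EU(A) &= p_A \cdot v(A) = 0.001 \times 10 = 0.01\\
EU(B) &= p_B \cdot v(B) = 0.9 \times (-5) = -4.5
\end{aligned}
\]
```

Even at a probability of 0.001, A-ing remains the only positively valued option, so X still ought to A; and no degree of unlikelihood turns the positive v(A) into the large negative expected utility that would be needed to rule (the token) A-ing out normatively.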
For sure, dramatically diminishing (positive or negative) the expected utility of an option can affect what the agent ought to do. And maybe even what the agent is required to do (morally or otherwise). There might even be special contexts of ought-blocking or requirement-blocking. But these would seem to be quite special contexts (and not contexts that are terribly important in our practical life).
Thanks, David. That is helpful.
I don’t think the approach I’m suggesting would yield anything like a general rule against moral requirements that are difficult to meet (or otherwise unlikely to be met). More like this: the fact that it is super-unlikely that the agent will pull off what she putatively ought to do is strong evidence that (and could help make it the case that) it is not the case that she ought to do it. You are right that specifying when unlikelihood blocks the ought in particular cases (and when it does not) would be something of an involved matter. But maybe that is just how things are (we cannot rule out non-simple ought-blocking conditions a priori).
Part of my reason for thinking that this idea is plausible enough for serious consideration is formal and explanatory: if the question is what does not make it onto an agent’s slate of options for a decision, it is clear that things she cannot do are ruled out; so why aren’t other things ruled out as well? But I think the bigger reason is intuitions about particular cases.
First, there are pairs of cases in which, in the first case, X cannot A and, in the second, X’s best efforts would yield only an infinitesimal chance of success in pulling off A-ing. When considering such case-pairs, it seems that, in each case, it is not the case that X ought to A (though it also seems that it is only in the first case that it could not be that X ought to A — as should be expected if other conditions are required to get the ought-blocking work done). If I add in that the decision-context is literally all-important (e.g., A-ing, however unlikely, is X’s only shot at saving herself from a painful death), it seems that in the second case (but not the first) X ought to A (so the ought-blocking is nullified, so to speak).
Second, consider garden-variety practical cases. Suppose I need to choose between staying at home to watch TV and going out to see a basketball game with you. Both options are good ones, but I dawdle and, by the time I’m ready to decide, it is highly unlikely that we can make the game. And so it is probably not the case that I ought to (attempt to) go to the game with you. To anticipate an important wrinkle: it might seem that the work here is done by the opportunity cost of trying but failing to go to the game (not by the unlikelihood of success). So we should rejigger the case to screen this off. Suppose, then, that it is pleasant for me to ride in the car with you, just enough so that the opportunity cost of failure is zero. It still seems that it is not the case that I ought to go to the game with you (or attempt to), at least if success is unlikely enough.
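To see the screening-off at work, here is the arithmetic with made-up numbers (the values and the probability are purely my own illustration, not part of the case as described):

```latex
% Let v(game) > v(TV), and let the failed attempt (the pleasant ride) be
% worth exactly v(TV), so the opportunity cost of failure is zero.
\[
\begin{aligned}
EU(\text{attempt}) &= p \cdot v(\text{game}) + (1-p) \cdot v(\text{ride})
                    = 0.05 \times 10 + 0.95 \times 4 = 4.3\\
EU(\text{stay home}) &= v(\text{TV}) = 4
\end{aligned}
\]
```

With the opportunity cost zeroed out, the expected-utility comparison favors attempting for any nonzero chance of success. So if it still seems that it is not the case that I ought to attempt the game, that intuition is tracking the unlikelihood itself rather than any expected-utility shortfall.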
(An aside: perhaps we can generalize in this way about unlikelihood, including internal barriers to ability: all else equal and in particular controlling for opportunity cost, that one is less likely to pull off an option makes for one having less reason to take that option. In this way, the likelihood or unlikelihood of pulling off an action might always have some direct normative upshot with respect to the degree and direction of normative pressure bearing on that action.)