This is a paper draft (still a bit drafty; helpful to have more of the context of debate, but hopefully the key points are accessible on their own; comments welcome). The actual working title is below (not the playful title of this post).
*****
EXPLAINING REQUIREMENTS
(i) oughts and requirements and the PL model
(ii) the PL model: dig it!
(iii) how extant “two kinds of reasons” reduction strategies fail
(iv) a better strategy: put fitting-attitudes meat on the PL model
(v) meeting Snedegar’s challenge: explaining the covariation (easy cases)
(vi) “going structural” to tackle the hard cases (morality)
(vii) conclusion
I. OUGHTS AND REQUIREMENTS AND THE PL MODEL
I’m rereading Justin Snedegar’s paper, “Reasons, Oughts, and Requirements” (2016, https://philpapers.org/rec/SNEROA). He’s interested in whether “reasons firsters” about normativity broadly speaking can account for normative requirements, given that requirements are distinct from normative oughts.
What needs explaining is this: (a) oughts (‘ought’, ‘most reason’) are distinct from requirements (‘must’, ‘have to’) and (b) there are one-way entailments from requirements (‘must’, ‘have to’) to oughts (‘ought’, ‘most reason’) and from oughts to permissions (‘may’).
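The entailment data can be put schematically (my gloss on the structure, not notation from Snedegar's paper):

```latex
% One-way entailments among the narrowly deontic notions:
\mathrm{Required}(A) \rightarrow \mathrm{Ought}(A), \qquad
\mathrm{Ought}(A) \rightarrow \mathrm{Permitted}(A)
% ...but neither converse holds:
\mathrm{Ought}(A) \not\rightarrow \mathrm{Required}(A), \qquad
\mathrm{Permitted}(A) \not\rightarrow \mathrm{Ought}(A)
```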
Snedegar suggests the following model, which provides something of a formal explanation of these relationships. Start with ranked options or alternatives and represent them in a vertical arrangement, bottom option worst and top option best. Now draw a horizontal line somewhere between the options, creating a class of better options (which are permitted) and worse options (which are not). This is the “permission line” or PL. There must always be a PL (having no PL would make the narrowly deontic properties otiose in some cases), and the PL cannot go above all of the options (which would make every option forbidden).
The results:
(1) X being required to A is represented by only one option being above PL; since that option is best, X ought to A. But if X ought to A, this need not be a case in which only one option is above PL. So we have the one-way entailment from requirement to ought (and requirements and oughts are distinct).
(2) It being the case that X ought to A is always also a case of X being permitted to A because PL cannot go above all of the options (so oughts entail permissions). However, since there can be permitted options that are not best, permissions do not entail oughts.
(3) Properties like supererogation and “the least one can do” are accommodated. One does the least that one can do (or the least that one is permitted to do) when one takes the worst of the permitted options. One supererogates, at least in a formal sense, when one takes an option that is not the worst of the permitted options (and one does “more than is required” in supererogating because, in the relevant sorts of cases, one is required to take one of the set of more than one option above PL).
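These three results can be sketched in a few lines of code (a minimal illustration of the formal model only; the function names and the representation of PL as a count of permitted options are my own, not Snedegar's):

```python
# Minimal sketch of the PL model: a ranking lists options best-first,
# and the permission line is represented by pl, the number of options
# above the line. Snedegar's stipulations: 1 <= pl <= len(ranking).
# All names here are illustrative, not from Snedegar's paper.

def permitted(ranking, pl):
    """The options above the permission line."""
    return ranking[:pl]

def ought(ranking, pl):
    """One ought to take the best option."""
    return ranking[0]

def required(ranking, pl):
    """An option is required iff it is the only one above PL."""
    return ranking[0] if pl == 1 else None

ranking = ["keep promise", "white lie", "big lie"]

# (1) Requirement entails ought, since the sole permitted option is best...
assert required(ranking, 1) == ought(ranking, 1)
# ...but an ought need not be a requirement (here two options are permitted):
assert required(ranking, 2) is None

# (2) Ought entails permission (PL never excludes every option)...
assert ought(ranking, 2) in permitted(ranking, 2)
# ...but permission does not entail ought: "white lie" is permitted, not best.
assert "white lie" in permitted(ranking, 2) and ought(ranking, 2) != "white lie"

# (3) "The least one can do" is the worst permitted option; one formally
# supererogates by taking a permitted option better than that.
assert permitted(ranking, 2)[-1] == "white lie"
```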
II. THE PL MODEL: DIG IT!
I like this model. It is limited in that it provides only a formal explanation as to why the permission-status of an option tracks the simple ranking of the options as it does (to yield the entailments and for the two distinct normative rankings or evaluations not to vary independently). And it does this only through stipulating that there is always a PL and that the PL cannot be placed above all of the options. A more complete account, perhaps not merely formal in nature, would provide rationales for these stipulations. (I think the former stipulation can be justified as inherent in the subject matter, but the latter cannot. Regarding the latter, I suspect that there are option-rankings in which none of the options is permitted — e.g., tragic prudential and moral choice situations. If this is right, then perhaps the second stipulation is an abstraction that often but not always applies but allows us to have relatively simple, coherent systems of deontic logic.)
Another nice thing that the model does is give us a neat way of thinking about scenarios that are often characterized in terms of “conflicting requirements” (e.g., tragic prudential or moral choice situations). The model (or the extended, more general version of it, in which the PL can be placed above all of the options) suggests that we should think of these cases as cases in which none of one’s options is permitted (i.e., they are all forbidden). And here’s the rub that, arguably, often leads to thinking about these cases in the wrong way: normally — but not in such cases — when an option is forbidden, one is required to do something, viz., take one of the options that is not forbidden. So, normally, we can infer from an option being forbidden to a corresponding thing being required (viz., it being required not to do the forbidden thing, whether or not there is more than one option for doing so). But it is a mistake to make this inference in these PL-above-all-options scenarios and, when we do so, we mistakenly think that there are two (or more) “conflicting requirements” present — and we are tempted to think of particular moral requirements as specific, contributory valences that go into making the overall ranking (as many now interpret Ross’s idea of prima facie duties). Wrong! The model here suggests that, in Kant’s famous case, one (morally) ought to lie, but in doing so one does something that is not permitted (forbidden). One is not doing the only permitted thing (the required thing). Using somewhat different language, all our options in such a case are unacceptable — even the best one.
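The extended model's treatment of tragic cases can be put the same way (again, an illustrative sketch in my own notation, not Snedegar's: the only change from his official model is allowing the permission line above every option, i.e., zero permitted options):

```python
# Extended PL model: drop the stipulation that PL cannot sit above all
# options, so pl = 0 is allowed. Representation and names are mine.

def permitted(ranking, pl):
    return ranking[:pl]          # pl = 0 => no option is permitted

def ought(ranking, pl):
    return ranking[0]            # the ought still tracks the best option

def required(ranking, pl):
    return ranking[0] if pl == 1 else None

# Kant's case, on this reading: lying to the murderer is the best option,
# so one ought to lie; yet no option is permitted, and so nothing is
# required. The forbidden-to-required inference fails here.
tragic = ["lie to murderer", "give up the friend"]
assert permitted(tragic, 0) == []
assert ought(tragic, 0) == "lie to murderer"
assert required(tragic, 0) is None
```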
III. HOW EXTANT “TWO KINDS OF REASONS” REDUCTION STRATEGIES FAIL
Enough with the PL model and the digging of the PL model.
Snedegar goes on, in his paper, to criticize various extant versions (and some beefed-up-for-the-sake-of-argument versions) of the strategy of introducing “two distinct kinds of reasons” (or distinct respects of having-reason or normative valence) in order to account — in a substantive and reductive way — for the one-way entailment data that his model accounts for in a non-substantive, formal way. So, e.g., Matthew Bedke takes [X being required to A] to come to something like [everyone having most reason to require of X that she A]. And Joshua Gert has the first thing come to [X’s “justifying reasons” against her A-ing not being strong enough to overcome her “requiring reasons” to A]. Snedegar criticizes both of these views, figures out a way to fix some of their problems, but still finds these approaches wanting because they cannot explain, in a non-ad-hoc way, why the two dimensions of evaluation for options necessarily line up as they do (in the way that explains the one-way entailments). Conclusion: it is not easy to “reduce requirements to reasons” and the regular and beefed-up versions of the Bedke-style and Gert-inspired approaches have big, probably fatal, problems.
IV. A BETTER STRATEGY: PUT FITTING-ATTITUDES MEAT ON THE PL MODEL
However, whether or not we count the resulting approach as “reasons-first” (or even “having reason” or “specific normative valence” first) — I’ll remain neutral here — the more appealing approach for providing something like a substantive, reductive account of requirements (and all the other narrowly deontic normative properties) is simply to add some substantive meat to Snedegar’s “PL” model. And there is an obvious move to make here: treat the permission-status of options as a fitting-attitudes feature. Perhaps the most general and intuitive feature to make use of here is acceptability. Options not only have a degree-of-choiceworthiness ranking but, in virtue of more-specific features of this ranking (e.g., intrinsic negative or positive valences), they are either worthy of acceptance or not. (It is perhaps better to think of acceptability in terms of not being worthy of rejection, but let’s keep things simple here.) And one might analyze fitting acceptance in terms of one’s having most reason (of a particular, relevant sort that one will have to account for) to accept. However we think of reasons and having-reason, this would seem to be a kind of generalized reduction to specific, in-principle merely contributory, normative valences or elements.
In addition to having the right sort of aim from the outset, this approach provides an intuitively appealing answer to a broad problem that Snedegar poses for any view that appeals to two distinct but related rankings or evaluations: we need to say just why the two rankings or evaluations covary in such a way as to vindicate (and explain) the one-way entailments. (Why, when everyone has most reason to require of X that she A, must it also be the case that X has most reason to A, as Bedke would have it? Why, when X’s “justifying reasons” against A-ing are not strong enough to overcome X’s “requiring reasons” to A, must X have most “justifying reason” to A, as the Gert-inspired view would have it?) The relevant relationships are not only explained by Snedegar’s model, but it is highly intuitive that, if any or all of one’s options are unacceptable, this is because they are worse than the others or otherwise bad. It would be crazy to say, ‘I ought to finish this paper tonight. But the only acceptable option for me is to not finish it. That is what I must do!’.
V. MEETING SNEDEGAR’S CHALLENGE: EXPLAINING THE COVARIATION (EASY CASES)
I think this intuition is pretty decisive in the first-person prudential (and perhaps rational) cases — intuitively, the acceptability seems to work the right way here. However, we might still want a deeper explanation of this tight relationship between choice-rankings and minimal, accept-or-not or reject-or-not response rankings. Maybe something like this is true: some of the elements that speak in favor of choosing an option also decisively speak in favor of it being acceptable. The question is what those elements are (or how they do this work in any given sort of case). For example, perhaps that not finishing the paper tonight fills me with dread both speaks against not finishing it tonight and makes not finishing it unacceptable (so that — if it is not also unacceptable to finish it tonight — finishing tonight is the only acceptable option; I must get it done!). Of course, there is a huge promissory note at the center of this explanation, regarding what such dual-response-ranking elements are and why or how they do what they do.
VI. “GOING STRUCTURAL” TO TACKLE THE HARD CASES (MORALITY)
More importantly, when we think about moral normative requirements and how my fitting-acceptance (and having-most-reason-to-accept) proposal might explain how choice-worthiness and acceptance-worthiness correlate as they do, things get more complicated and potentially go off the rails. For the relevant sort of acceptability here is acceptability to anyone, not just the agent — yet the normative choice-ranking is agent-centered or just “for” the agent. It is less clear why this sort of acceptability should line up in the requisite ways with the choice-worthiness ranking (to yield the one-way entailments).
However, there is a helpful move to make here. We can tell a structurally similar explanatory story at the level of (choice-relevant and acceptance-relevant) standards while remaining neutral on whether those standards are normative. In the moral case, we regard the standards as non-normative (or regard them only in their structural, non-normative aspects) — and then there will be a distinct explanatory story to provide regarding how or under what conditions the relevant standards are normative for the agent (or for everyone, as the case may be). So we might say something like this: some elements that go into the impartial moral ranking of options that an agent might take (choice axis, impartial moral evaluation and ranking) also determine a kind of acceptability or unacceptability of those options for anyone (acceptability axis, impartial moral evaluation and minimal acceptance/non-acceptance ranking). Maybe one such element is something like an option counting as a lie.
It is then a separate issue how the impartial, moral choice-ranking here maps onto the all-things-considered normative choice-ranking (and the conditions under which this agent-centered choice-ranking overlaps, in an isomorphic way, with the impartial, moral choice-ranking). And similarly for the agent-neutral moral acceptability standards (how and why and when these map onto an agent’s having reason of the right sort to accept or reject options).