WHEN A SOCIETY OUGHT TO BE SOME WAY

What does it mean to say that a certain institutional arrangement P in some society S ought (or is morally required) to be in place? Maybe that comes to this: S is required to come up with and implement a plan to achieve P. And perhaps that, in turn, comes to something like this: each individual and collective agent in society is required to make reasonable efforts, relative to role or position, to promote S (all of us collectively) coming up with and implementing a plan to achieve P. Different agents in different roles would have different, more specific requirements.

Is this kind of analysis standard? What are the alternatives?

If this analysis, or something very much like it, is right, there would seem to be some important results that I don’t think are always acknowledged in discussions of justice with regard to the basic structure of a society.

HOW EXTREME UNLIKELIHOOD MIGHT BLOCK REQUIREMENT SPECIFICALLY

Suppose that general normative requirement works like this: if X is generally required to A, this is partially constituted by X’s not-A-ing options in her choice situations starting out with a very high negative valence (one that generally swamps any negative valence of the A-ing options). Now suppose that, in a particular choice situation S, it is super-unlikely that X will pull off A-ing. In such a case, the relevant option is really her attempting to A. But any attempt to A is almost certain to end in her not-A-ing. It seems plausible, then, that all of X’s options in S have nearly the same magnitude of highly negative valence. So there is not, as there would normally be, some huge “valence gap” between (token) A-ing and (token) not-A-ing. There is no normative “swamping” to leave A-ing as the far-and-away best option. And so, despite being under a general requirement to A, X is not, in S, required to A (to realize this token of A-ing).
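
One rough way to picture the collapse, on an expected-valence gloss (my own simplification, not a claim about how valence must work), with p as the probability that X’s attempt at A comes off:

$$ V(\text{attempting } A) = p\,V(A\text{-ing}) + (1-p)\,V(\text{not-}A\text{-ing}), $$

which tends to $V(\text{not-}A\text{-ing})$ as $p \to 0$. When p is tiny, the valence of her best option (attempting to A) sits nearly on top of the valence of her plain not-A-ing options, and the usual swamping gap disappears.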

THE OUGHT-DEFEATING WORK OF UNLIKELIHOOD?

Having reread David Estlund’s “Human Nature and the Limits (If Any) of Political Philosophy” (2011) – I had read it, or similar material from Estlund, years ago – I have some thoughts. Here is one. (For a variety of more shooting-from-the-hip points, of varying quality and level of present endorsement, see David Potts’ more comprehensive critique of Estlund’s article and specifically my comments there (Estlund’s Defense of Ideal Political Theory).)

*****

If agent X’s being unable to perform action (or carry out plan) A entails that it is not the case that X ought to perform A, why is this? This answer seems plausible: because the inability rules out A-ing as an option for X in her deliberation and decision-making. If this is right, then why suppose that only the ability/inability binary is relevant? Why not a cut-off in a relevant scalar quantity? Specifically: perhaps if it is unlikely enough that X will pull off A (whether or not due to relevant deficits in X’s internal, psychological abilities), then A-ing is not an option for X — and so it is not the case that X ought to A.

a thought or two prompted by reading chapter one of Gaus’ “The Tyranny of the Ideal”

Suppose that, for a certain type of cooperative endeavor in a certain type of circumstance, the only appropriate fairness-pattern (in the distribution of benefits and burdens) is equal shares of what is produced (as long as a certain minimum effort, of a certain minimal quality, is put forth). So, we do the thing, everyone crosses the effort/quality threshold, and we distribute the fruits of our labor equally. Is the distribution perfectly or completely fair or just?

Not necessarily. Maybe my contribution involved my unfairly acquiring something (say, wood for a fire that needed to be fed) from someone. Or maybe, though I traded fairly to get my wood, the person I got it from obtained it from some other person unfairly. The general pattern here (that need not involve anything like a chain of transactions a la Nozickian procedural justice) is: (social state of affairs) that-P is just only relative to the justice of (relevant social circumstance) that-Q; but it might be that, if that-Q is just, it is just only relative to the justice of (further relevant social circumstance) that-R; etc. Though there is no reason why this explanatory chain has to be super-long or super-complicated in all scenarios, at the level of evaluating whole societies and the complex interactions, norms and institutions that compose them, some considerable number of salient justice-evaluable circumstances and some considerable complexity should be expected. But that pushes us toward the idea that ideals of perfect or complete justice are unmanageable and quixotic. 

A FUNDAMENTAL RESPECT IN WHICH SCANLON SEEMS TO BE WRONG ABOUT MORAL WRONGNESS (AND WHAT THE CORRECT APPROACH MIGHT BE)

I think Scanlon’s main thing, his account of moral wrongness, asserts an implausible explanatory relationship. Arguably, it says something like this: morally wrong actions are those actions that would be disallowed according to a principle of public, collective disallowing (“discouraging”) that, if followed, would not result in anyone being wronged (mistreated, abused, etc.). 

This is funny at least because morally wrong actions that are wrongings of persons seem to be morally wrong because the actions themselves are wrongings of persons. Why should something like [the public, collective disallowing of an action] not being a wronging of a person be relevant to the disallowed action being morally wrong?

resenting you, rationally

Suppose I believe that you have insulted me unprovoked, and I have some, but not sufficient, reason for this belief (we’ll be setting aside entirely whether you have actually insulted me unprovoked and hence whether my resenting you for what you have done would be correct). In a certain familiar sense, it is not rational for me to resent you for what you have done (there is more rational support for the not-resenting than for the resenting). This is the same sense in which I am not justified in believing Q if, though I believe that P and that P implies Q, I’m not justified in having one or both of these beliefs.

why (and in what sense) is there always reason to object when there is reason to resent?

Here’s a puzzle. Or at least something that we might want to have a good explanation of. Intuitively, one having reason to have some particular type of attitude (including some particular type of moral, reactive attitude) is tightly, necessarily or essentially connected to one having reason to do things that one tends to do when one has the attitude (or that tend to “go along with” having the attitude). For example, when I have reason to resent you for how you have treated me, I have reason to object to you (or the community at large) for your treating me this way (and also: complain, protest, resist, demand apology, demand compensation, etc.). Plausibly, if it is appropriate for me to resent, then necessarily it is appropriate (in some related way) for me to object (even if, all things considered, I have more reason to refrain from objecting than to object); and, conversely, if it is appropriate for me to object (in the requisite way), then necessarily it is appropriate for me to resent. Yet: we have two distinct responses here (call them PHI-ing and PSI-ing), and, if this is all the information we have, we should suppose that having reason to PHI and having reason to PSI are not connected in any necessary or essential (or even systematic but conceptually or metaphysically contingent) way. Why does having reason to resent have anything at all to do with having reason to object?

RESPECTFULLY, I RESENT

If I’m resenting the things that I should resent and not resenting the things that I should not resent, I’ll resent you for just up and insulting me out of nowhere. But I won’t resent you for insulting me if you have good reason (or reason of the right kind) to insult me. Similarly, if you negligently do me harm or knowingly (or intentionally) harm me.*

If, as I think we should, we read ‘you have good reason (or reason of the right kind) to insult me’ as referring to fact-relative or objective normative support for the insulting, then appropriate resentment (and non-resentment) is sensitive, in part, to the reasons of (or what matters to) the person who would insult one. And that, I think, is an important result, for it implies that “taking the interests of others into account” (a rough but apt phrase) is built into the standards that govern our reactive attitudes (or at least this reactive attitude). I think this is an interesting way of explaining our taking others into account – as agents, as rational beings, as beings with things that matter to them, not just as ordinary furniture of the universe or generic circumstances relevant to setting goals and making plans – at a basic psychological and normative level.

Why we shouldn’t complain quite so much about complaint theory

In Ch. 4 (“Wrongness and Reasons”) of WHAT WE OWE TO EACH OTHER, Thomas Scanlon introduces us to the basic idea of his “contractualist” theory of moral rightness and wrongness. Specifically:

an act is wrong if its performance in the circumstances would be disallowed by any set of principles for the general regulation of behavior that no one could reasonably reject as a basis for informed, unforced general agreement. (p. 153, WWO)

There are many elements here to unpack in order to fully understand Scanlon’s view. But the view belongs to a certain family of views of moral wrongness (or of moral wrongness that is also the wronging of a person): what Derek Parfit calls “complaint theories” of moral wrongness. On this kind of view, roughly, an action is morally wrong just in case (and because) someone would have sufficient reason to complain about its being performed or publicly allowed (the action being, in this sense, unjustifiable to others).

ON APPROPRIATELY FEARING THE REAPER (HOW THE FITTINGNESS OF FITTING ATTITUDES IS NOT A FUNCTION OF WELFARE-VALUE)

In the MTSP discussion of the third chapter of Scanlon’s WWO, on well-being, I brought up the following as a case of generic normative pressure (for an agent) that does not consist in the realization or promotion of some inherent benefit (for that agent): one having reason (or it being appropriate) to fear scary things.

My suggestion was met with vociferous protest (from Irfan and David R.). If any response is tightly connected to standards of well-being, it is the fear response! Classroom to Calvin (Calvin and Hobbes): “Bats aren’t bugs!” But I suspect that I was misunderstood (and was not, myself, clearly distinguishing the claim I meant to be making from other, somewhat similar claims).
