Suppose I believe that you have insulted me unprovoked and I have some, but not sufficient, reason for this belief (we’ll be setting aside entirely whether you have actually insulted me unprovoked and hence whether my resenting you for what you have done would be correct). In a certain familiar sense, it is not rational for me to resent you for what you have done (there is more rational support for the not-resenting than for the resenting). This is the same sense in which I am not justified in believing Q if, though I believe that P and that P implies Q, I’m not justified in having one or both of these beliefs.
However, things get more complicated when we add relevant sorts of higher-order beliefs (perhaps implicit). Suppose I regard my belief that you have insulted me unprovoked as sufficiently supported by evidence that I have (though it is not). Now, it seems, it is rational for me to resent you for what you have done (despite my believing only on insufficient evidence that you have insulted me unprovoked). After all, I both believe that you have insulted me unprovoked and I take myself to be justified in thinking this.
How do we square these two different ways in which a “conclusion” attitude can be rationally supported? A certain kind of “squaring” is easy: we have, at a certain level, just one form of rational support – rational support for an attitude that is a function of (and relative to) one’s having the relevant premise-type attitudes and these having some kind of positive rational status (e.g., being known, being supported by sufficient reasons or evidence, being thought to be known or thought to be supported by sufficient reasons or evidence). We don’t have two fundamentally different kinds of rational support.
(Noticing that rational support can occur on the basis of beliefs that are not adequately rationally supported, at least not in the standard first-order-attitude-involving way, one might conclude that belief per se – regardless of its having any positive rational status at all – provides a distinctive kind of rational, normative support for attitudes and actions. My picture, based on the same sorts of cases, is meant as an alternative to this. Though there might be other grounds for thinking that there is a “purely subjective” sort of rational support, I’m doubtful that there is such a thing.)
What happens – how do we figure out overall or all-in rational status – when these two avenues for rational support (in accordance with a given relationship or rule) conflict? I’ve already indicated how one sort of case (that of inadequate first-order support, but higher-order belief that there is such support) plays out. Similarly (and, again, simply asserting my intuitive take), in the converse case, the case of adequate first-order support but higher-order belief that this is not present, we should say: the rational thing for me to do is not to resent you for what you have done.
(Here is a general sort of picture that might vindicate my intuitions about how these two different ways that an attitude might garner rational support are related. The idea: there are always higher-order beliefs about our first-order beliefs (perhaps information about how our different attitudes function – what they aim at and how – would suffice) doing this part of the work in making for rational, normative support. What we take to be purely first-order cases, then, are not really such; we take them to be because, typically, we at least implicitly register it when our first-order reasons support our attitudes (or we have other higher-order beliefs that do this same work). Such perhaps-implicit “reflective” representation of “what we are doing” would then be part of being rational (with a distinctive, deciding role for this and other relevant, usually implicit higher-order beliefs). If this is right, then a literally purely first-order case would be a movement from one attitude to another in a functionally correct way that is not rational in the sense of involving rational normative support for attitudes (e.g., moving from belief to belief in accord with rules of logical validity, such that, if one starts with true beliefs, the conclusion beliefs will be true as well). That, plus a mechanism that tends to start with true premises or knowledge, constitutes a good, reliable way of representing the world accurately. It just would not be a knowing, self-guided rational system for doing so.)
I wrote a response to this a few days ago, but lost it, so here’s a reconstruction.
It seems to me that you’re conflating two different conceptions of “rational,” an epistemic one and a deliberative one. In the first paragraph, you say that the person lacks “sufficient” evidence that he’s been insulted without provocation. “Sufficient” here means: lacks sufficient evidence to be epistemically justified in the belief that he’s been insulted. So when you say that the person’s further inferences from that original belief are not rational, what you mean is that they are not epistemically justified.
If we stick to that conception of rationality, then it doesn’t change things to add higher-order beliefs to the person’s doxastic set. Suppose he believes that he’s been insulted without provocation, and then, on reflection, takes himself to be justified in that belief, takes himself to be well circumstanced epistemically to form beliefs of this kind, etc. Ex hypothesi, he’s wrong about all that. So if insufficiency for justification rendered him non-rational in his reaction to the other person at the outset, nothing has changed. If anything, his situation is worse.
It seems to me that you start with an epistemic conception of rationality, and then switch to a deliberative one involving strongly internalist strictures. On this view, as long as the agent’s beliefs, desires, and other attitudes all cohere without internal conflict, he’s “rational.” Whatever the merits of that view, I guess what I would say is that it’s not well-motivated by the way you set the example up in the first paragraph.
The general question is how to characterize the sense in which one is rational in inferring Q from [P implies Q] and P, when one lacks sufficient evidence for believing that P, for believing that P implies Q, or for both (I’m just applying this to “inferring” resentment from the belief that you have insulted me unprovoked — or perhaps from the belief that, because you have insulted me unprovoked, you have done something that is resentment-worthy). One is not rational in believing Q in this sort of case in the sense of being epistemically justified. One is rational in some other way.
One might simply deny that there is any such sense in which one is rational in believing that Q (or in resenting you for what you have done). But I’m accepting the intuition here. I’m rejecting a certain account of it, according to which the rational support (definitive in its way) is relative simply to having the premise-attitudes (the relevant beliefs). It seems more sensible to me to believe something like this: the premise attitudes (beliefs) get the relevant sort of inference-underwriting status (again, not full-blown rational justification) by way of our having certain (perhaps typically implicit) higher-order beliefs about the first-order attitudes (beliefs) — beliefs to the effect that the first-order attitudes aim at being true and are true or likely true.
I’m not sure that is right. I agree that this sort of thing does not affect attitudes’ being rationally justified in the standard sense. But I’m suspicious of the flat-footed concept of “purely subjective” (merely belief-relative) rationality (or rational support). And I suspect that rational justification is achieved in part by way of having such (general, implicit) higher-order beliefs — so that these sorts of beliefs are plausible candidates for doing this kind of work.
Regardless of whether my suggestion is right, we might think that this “lesser” form of rational support is the immediately action-guiding (or response-guiding) sort of rational support, whereas being rationally justified is more rigorous and reflective, involving calling into question beliefs that you might well have already inferred from, relied upon, etc.
This is, admittedly, pretty far in the weeds. The context is thinking about the different senses in which resentment might be “appropriate”: (i) it might be correct (you have in fact insulted me unprovoked), (ii) it might be rationally justified (I know or am justified in believing that you have insulted me unprovoked), (iii) it might be rational in the lesser sense of it being based on the belief (not necessarily justified) that you have insulted me unprovoked. The question here is just how to think about this last sort of “appropriateness” status (the merely-belief-relative or “purely subjective” rational support interpretation, my suggested interpretation in terms of relevant higher-order belief providing for a kind of rational support — and there are probably other interpretations).