Suppose I believe that you have insulted me unprovoked and I have some, but not sufficient, reason for this belief (we’ll be setting aside entirely whether you have actually insulted me unprovoked and hence whether my resenting you for what you have done would be correct). In a certain familiar sense, it is not rational for me to resent you for what you have done (there is more rational support for the not-resenting than for the resenting). This is the same sense in which I am not justified in believing Q if, though I believe that P and that P implies Q, I’m not justified in having one or both of these beliefs.
However, things get more complicated when we add relevant sorts of higher-order beliefs (perhaps implicit). Suppose I regard my belief that you have insulted me unprovoked as sufficiently supported by the evidence I have (though it is not). Now, it seems, it is rational for me to resent you for what you have done (despite my believing only on insufficient evidence that you have insulted me unprovoked). After all, I both believe that you have insulted me unprovoked and take myself to be justified in thinking this.
How do we square these two different ways in which a “conclusion” attitude can be rationally supported? A certain kind of “squaring” is easy: we have, at a certain level, just one form of rational support – rational support for an attitude that is a function of (and relative to) one’s having the relevant premise-type attitudes and those attitudes’ having some kind of positive rational status (e.g., being known, being supported by sufficient reasons or evidence, being thought to be known or thought to be supported by sufficient reasons or evidence). We don’t have two fundamentally different kinds of rational support.
(Noticing that rational support can occur on the basis of beliefs that are not adequately rationally supported, at least not in the standard first-order-attitude-involving way, one might conclude that belief per se – regardless of its having any positive rational status at all – provides a distinctive kind of rational, normative support for attitudes and actions. My picture, based on the same sorts of cases, is meant as an alternative to this. Though there might be other grounds for thinking that there is a “purely subjective” sort of rational support, I’m doubtful that there is such a thing.)
What happens – how do we figure out overall or all-in rational status – when these two avenues for rational support (in accordance with a given relationship or rule) conflict? I’ve already indicated how one sort of case (inadequate first-order support, but a higher-order belief that there is such support) plays out. Similarly (and again simply asserting my intuitive take), in the converse case (adequate first-order support but a higher-order belief that such support is not present), we should say: the rational thing for me to do is not to resent you for what you have done.
(Here is a general picture that might vindicate my intuitions about how these two different ways an attitude might garner rational support are related. The picture: there are always higher-order beliefs of some sort about our first-order beliefs (perhaps information about how our different attitudes function – what they aim at and how – would suffice) doing the work, that is, doing this part of the job of making for rational, normative support. What we take to be purely first-order cases, then, are not really such; we think they are because, typically, we at least implicitly realize it when our first-order reasons support our attitudes (or we have other higher-order beliefs that do this same work). Such perhaps-implicit “reflective” representation of “what we are doing” would then be part of being rational, with a distinctive, deciding role for this and other relevant higher-order beliefs, usually implicit. If this is right, then a literally purely first-order case would be one of movement from one attitude to another in a functionally correct way that is not rational in the sense of involving rational normative support for attitudes (e.g., moving from belief to belief in accord with rules of logical validity, such that, if one starts with true beliefs, the output or conclusion beliefs will be true as well). That, plus a mechanism that tends to start out with true premises or with knowledge, constitutes a good, reliable way of representing the world accurately. It just would not be a knowing, self-guided rational system for doing so.)