Is Knowledge metaphysically (or conceptually) prior to belief?

J. Adam Carter, Emma C. Gordon and Benjamin Jarvis have an anthology coming out soon on the “knowledge first” approach to epistemology and mind (based on the work of Timothy Williamson in his Knowledge and Its Limits).  (Maybe the volume is already out, but I could not find it on the interwebs.)  Their introductory essay contains some clear and insightful summary of various knowledge-first theses (including what they take to be the central one) and discusses a central motivation for the knowledge-first approach.  Here is that essay:

https://www.academia.edu/23517268/Knowledge-First_An_Introduction

And here are some excerpts/summary from this text and some commentary from me (in bold).  

(If there is a theme to my recent philosophical commentary here at PoT, it is the importance, in many cases, of understanding how things function (or what their function is) in order to understand better what they are.)

  1. (p. 1, para. 1)  “Knowledge-first epistemology is (in short) the idea that knowledge per se is an epistemic kind with theoretical importance that is not derivative from its relationship to other epistemic kinds such as rationality [belief, justification, truth].  Knowledge-first epistemology is rightly associated with Timothy Williamson (2000) in light of his influential book, Knowledge and Its Limits (KAIL). In KAIL, Williamson suggests that meeting the conditions for knowing is not constitutively explained by meeting the conditions for anything else, e.g., justified true belief [1]. Accordingly, knowledge is conceptually and metaphysically prior to other cognitive and epistemic kinds. In this way, the concept know is a theoretical primitive. The status of know as a theoretical primitive makes it particularly suitable for use in making substantive constitutive and causal explanations of a number of other phenomena, including the nature of belief, the nature of evidence, and the success of intentional actions [2].”

The core idea here is simply that (a) knowledge (not anything that might in some sense constitute or compose it) has its own distinctive explanatory work to do in the epistemic realm (just what this constitutive/explanatory work is would need to be filled in).  But this is consistent with knowledge being constituted (and constitutively explained) by justification (or rationality), belief, and truth (along with some final element that rules out the merely-lucky confluence of the other elements – the ‘jtb+’ approach).  What it is not consistent with is the identity-reduction of knowledge to these other elements, leaving knowledge itself with no explanatory work to do.  If so, then why endorse the more dramatic claims that are more typical of the knowledge-first approach to the metaphysics and epistemology of belief and knowledge?  

  2.  (p. 1, para. 2 & p. 2, para. 1)  “As just indicated, Williamson takes the view in KAIL that knowledge—considered as a kind or type—has no constituents. (This should not be confused with the view that instances of knowledge aren’t at bottom physically constituted—Williamson is, in fact, a physicalist [3].) This negative idea seems to be that there are no further kinds that constitute knowledge when collectively instanced; there is no correct theory that identifies the kind ‘knowledge’ with some mix of distinct epistemic and cognitive kinds meeting specifiable conditions. Nevertheless, in KAIL, Williamson also offers a positive characterization of knowledge as the most general factive mental state [4]…   Williamson takes factive mental states to be at least on a par with non-factive ones [7]. Moreover, with respect to the (allegedly) central mental factive state—knowing—and the central cognitive non-factive correlate—believing—Williamson is clear that the former is no less explanatory than the latter. Even if it is possible to understand knowing as a kind of “apt” believing [8], it is also possible to understand believing as a kind of “botched” knowing [9].”

So here we have some of the stronger knowledge-first claims.  Viz., (b) knowledge is metaphysically primitive (relative to other epistemic or cognitive kinds) and (c) belief can be explained (and understood) in terms of knowledge (as well as vice versa).  Presumably, the mode of explanation here is not precisely constitutive.  Perhaps the idea is that (c.1) both belief and knowledge are constitutively primitive (relative to all other epistemic or cognitive elements) but nevertheless are necessarily related to each other in ways (that philosophers might characterize as a metaphysical entailment relationship) such that each partially explains the other.  Regarding the epistemic analogue to explanation – understanding – the idea here might be that (c.2) we can understand the concept of belief through understanding the concept of knowledge as well as vice versa.  (I’m not sure, but it may be typical of knowledge-firsters, and perhaps Williamson in other moods or in other work, to claim that (c.2*) we come to have and understand the concept of belief through having and understanding the concept of knowledge.)

For the record:  I’m sympathetic to (a) and (c.2*), but not so sympathetic to (b) and (c.1).  What interests me more are what the authors give as the main explanatory motivation for the knowledge-first program (whether just (a) or the other claims as well).  That motivation is providing a (constitutive) broadly functionalist explanation of what beliefs are in terms of what they (the, or the various, not-necessarily-belief-constituting mental or physical elements) tend to do and how they tend to do it.  So on to that.

  3.  (p. 2, paras. 2 & 3; p. 3, para. 1)  “A central project within epistemology is to understand the proper assessment of belief. A central project within the philosophy of mind is to understand what a belief is. A not wholly implausible idea is that these central projects are, in fact, related. To understand better what a belief is, one needs to think about what happens when belief goes right, and to understand better what happens when belief goes right, one needs to think about what beliefs are…  For some time, the dominant approach to the theory of belief has been functionalism (at least broadly construed)—so that, to a first approximation, beliefs are what they do, i.e. believing any particular proposition is largely a matter of occupying a certain role [10]. Arguably, belief plays a number of roles—assertions express them, actions are based on them, topical understanding consists in them, and so forth. Consequently, the approach of understanding belief by understanding its proper assessment might begin by considering what it is for belief to go right in each of these roles…  A natural suggestion…  is that going right for belief is a matter of knowing. Williamson defends individual theses about the explanatory primacy of knowledge in KAIL—e.g., that it is the standard for proper assertion [11], that it is central to the explanation of action [12]—and others have defended further theses—e.g. that it is required for topical understanding [13]. Arguably, a unifying feature of these individual theses is that the phenomena at issue are closely associated with belief—so that belief might even plausibly be at least partly constituted by its role in each case. 
A knowledge-first addition to this last plausible idea is that the role that beliefs play generally is parasitic on the role that they play when things go right so that the belief qualifies as knowledge: there is a surfeit of ways for a belief to fail in assertion, in action, in understanding, etc., but we understand how beliefs can fail in these ways by considering what happens when they don’t fail—because the subject doesn’t merely believe but rather knows.”

Okay, there is a lot here.  First, there is the idea that (d) belief, as a kind, is functionally constituted (by doing a multiplicity of things, hence the different functional roles that constitute belief).  Second, there is the idea that (e) when things have functional roles they have success-conditions (with respect to both the outcomes that they tend to produce and with respect to instantiating, in the right ways, the characteristic procedures that tend to produce those outcomes) and hence are subject to broadly evaluative/normative standards.  (In this sort of case, having a function might come to nothing more than physical structures in an organism being finely calibrated in such a way that, in the type of environment that they are suited or adapted to, they tend to achieve certain results in specific, characteristic ways.  Also, ‘normative’ here does not mean normative in the action-guiding sense that is relevant to what we morally, non-morally, or generically should or have reason to do.)  I’m quite sympathetic to both (d) and (e).

I find the suggestion that things “going right” for belief is constituted by belief being knowledge puzzling.  Functional-role evaluation has two distinct elements – appropriate means/procedure and the achievement of relevant outcomes (that tend to be produced by the means/procedures).  But, at least on an initial, naive reading, the suggestion seems to be that knowledge is something basic in the cognitive or epistemic realm, the achievement of which constitutes the sole sort of general success in believing.  Setting aside the metaphysical claim, it is plausible that knowledge constitutes success in both ways at once and that it is this robust sort of success-for-belief that is most explanatorily important (e.g., in explaining the systematic – not merely episodic – success in the agential pursuit of aims or carrying out of plans).  And this fits well with what the authors take to be the central or most important claim of the knowledge-first approach, which is pretty much just the idea that knowledge is a natural kind.  But given that there are two different sorts of general success in believing – constituted by achieving truth and coming to and maintaining one’s beliefs rationally or with sufficient justification – there is no sin in splitting them up analytically and seeing what work they do.  And it makes sense that one would need to do this, if not for everyday explanatory purposes, then for a more sophisticated, detailed account of what success in belief is.  Of course, if truth and rationality do their own work and are necessarily correlated with knowledge, this cuts against the metaphysical thesis that knowledge is not (in part) constituted by truth and rationality/justification.

The next paragraph in the essay (not quoted above) suggests that the authors are somewhat alive to this issue.  What they say may even indicate that, true to their initial characterization of what is essential and important in the knowledge-first approach (at least as an account of what belief and knowledge are, regardless of the order of concepts or understanding) and in line with my expressed sympathies, the defensible metaphysical claim is simply that knowledge is a natural kind (or that it, not belief or true belief or rational belief, is the most explanatorily important natural kind).

  4.  The authors close the substantive part of the essay by examining two objections to the idea that knowledge constitutes the explanatorily important standard for success in belief.  The first is that belief does all sorts of things, and it is not clear that it always does them via the same route.  So perhaps there is no one thing that constitutes functional-cum-normative success in believing (and the one thing certainly is not knowledge).  The authors note both that this kind of radical difference in functional role might not be true and that, if it is true, we would do well to treat the outliers as being states that do not count as beliefs (at least not in the full-blooded sense that is correlated with good explanatory individuation of types).  So this objection to the explanatory thesis is not so powerful.  More powerful, perhaps, is the idea that on-off, all-or-nothing belief – and hence knowledge – is not explanatorily important; rather, degrees of credence are.  The authors say a bit about this in relation to how some of Williamson’s theses might bear on the problem, but do not offer much of their own by way of a response.  The threat here to the knowledge-first program is put in terms of belief, and hence knowledge, turning out to be epiphenomenal (in some sense not fully real, or not as real as credence and degrees of credence).

My initial response is this:  degrees of credence, like truth in belief and rationality in belief, probably do important explanatory work, relative to some things and at a certain pretty-fine-grained level of explanation.  But this is consistent with belief, relative to different explanatory work that is not quite so ambitious at the level of detail, doing this work when credence cannot (at least not for creatures like us, with the kinds of concepts we are capable of having and the kinds of propositions we are capable of entertaining).  The explanatory power of belief (true belief, rational belief) is consistent with the distinct and important causal/explanatory work that knowledge does.  Of course, this is at best an implied and schematic defense of each of the explanatory/natural-kind theses about belief (as against credences) and knowledge (as against belief, true belief, rational belief).  And of course the metaphysical issues about the priority relationships between properties/kinds must be carefully distinguished from the priority relationships between concepts (or the order of understanding).  I suspect that, with regard to the central theses here concerning the properties/kinds (and the relation to doing important explanatory work), the authors would be in broad sympathy with my answer to the worry that the reality and fundamentality of degrees of credence render belief, and hence knowledge, epiphenomenal.  If “reality” (or degrees of such) is determined by or correlated with there being good explanatory work that gets done, then there is a clear and plausible strategy for maintaining that both on-off, all-or-nothing belief and knowledge are real elements that do important work in explaining such things as truth or accurate representation, achieving scientific understanding, and systematic success in achieving agential aims (and probably much more).

20 thoughts on “Is Knowledge metaphysically (or conceptually) prior to belief?”

  1. Michael,

    Your post has resurrected these issues for me after a long absence. I haven’t read Carter et al.’s essay, and it’s been a while since I read Williamson’s book, but I think I still might be able to contribute something useful, so here goes.

    I was positively impressed with the content of Williamson’s book. (The writing itself is atrocious in the way of much academic writing. Let’s just say: Williamson is no Steven Pinker.) I was persuaded that he basically has it right and that what he’s saying is important.

    The form of “knowledge first” theory that makes sense to me can be understood by analogy with a “perception first” approach to sense-perception. Let us refer to veridical perception by “perception” and its cognates (“percept,” “perceiving,” etc.) and to nonveridical perception by “appearance” and its cognates. Thus, when I see the coffee cup on the desk in front of me, I perceive it. But if there really is no coffee cup, but only appears to be—through a trick of the light, mirrors, and holography somehow—then I do not perceive a coffee cup, I merely have an appearance of one.

    Now, what is the relation of perception to appearance? For instance, which comes first? It seems obvious that perception comes first. Appearance didn’t evolve independently, perhaps so we could enjoy hallucinations in our spare time. Appearance has no independent reason for being. Appearance did not pre-exist perception, so that one day by happy accident a perception came to be constructed out of appearance and other factors. Nor did perception evolve by appearances getting better and better until, one day, perception emerged. Perception must have been a form of cognitive contact with external reality from the start—primitive at first but growing in sophistication over evolutionary time—or nothing could have driven its evolution. Nor is perception a successful appearance. Nor do appearances “aim at veridicality.” These expressions assume an independent existence for appearances that they don’t really possess. Instead, the primary process is perception. Perception, not appearance, is what our sensory systems exist to support. When our sensory systems fail, for any of various reasons, we receive appearances instead of percepts. An appearance is thus a would-be percept that failed. But a percept is not an appearance that succeeded; when sensing is successful, there is no appearance at all.

    I suggest we look at the relation of knowledge and belief analogously to the way I have outlined the relation of perception and appearance. Just as sensing didn’t evolve to provide us with appearances, so cognition more broadly didn’t evolve to provide us with beliefs. Rather, it evolved to provide us with cognitive access to our environment; i.e., to provide us with knowledge. Just as, if sensing were always successful, we would have no need of the notion of appearances, so if knowing were always successful, we would have no need of the notion of beliefs. Just as perception is not a happy combination of appearance with certain other factors, so knowing is not a happy combination of belief with certain other factors. Just as perception did not evolve by the gradual improvement of nonveridical appearances, so knowing did not evolve by the gradual improvement of false beliefs. And knowledge is not a successful state of believing, and belief does not “aim at the truth.” Belief has no such independent existence as these formulations imply. (Bear in mind that belief is not the same as having a notion, an idea, a conjecture, a hypothesis, etc.) It is not that knowledge is belief that succeeded, it is that belief is would-be knowledge that failed.

    None of this means that perceiving and knowing don’t have constituents. For instance, in my view perceiving has a representational content, and there is also a representational state (an internal mental state) that represents that content. But merely representing the right sort of content is not the same as an appearance. A mere representation might be stimulated by probing my brain with electrodes, for example, without giving me the least inclination to act on it or to embed it in my store of knowledge. Similarly, any sort of “knowing that” has a propositional content and some sort of mental state that represents it. But simply representing a proposition is hardly the same thing as belief. I don’t know whether appearance or belief can be defined or uniquely specified independent of perception or knowledge, but I rather suspect not. Also, I don’t think that perception or knowledge can be defined or uniquely specified in terms of any collection of constituent parts. The crucial point here, as Williamson continually points out, is that knowledge requires awareness of the fact that is known, and awareness appears to be unanalyzable. I take this to be the main lesson Williamson draws from several decades of failed struggle to address the Gettier problem. I don’t think perceptual awareness is any more analyzable a relation than the awareness involved in knowing. If so, then neither knowledge nor perception is analytically definable.

    I’m inclined to think that the above is tolerably right and also tolerably what Williamson argues for in KAIL. If so, there are some misleading aspects to Carter et al.’s account. In particular, they assign much too much status to belief. Especially the idea that knowledge could be “things going right for belief” is something I don’t think Williamson would ever allow. The whole point of KAIL is to fight this idea. But also, talk of belief having a functional role is off base. Saying belief has a functional role is like saying disease or a broken bone has a functional role. In the knowledge first view, as I am presenting it anyway, belief results when would-be knowledge fails in its functional role. Belief has no functional role of its own.


    • That’s interesting. I have to think more about it; something about it doesn’t sit right with me, but I can’t figure out what right now.

      For now, just a passing observation: your rationale for the “knowledge first” view coheres with quasi-folkloric accounts I’ve heard through the Objectivist grapevine of why Rand defined “knowledge” as she did, and why “belief” plays no significant role in her epistemological writings. There’s a lecture of David Kelley’s out there somewhere (I can’t remember where) in which he makes roughly the same point as the one you’re making, though not nearly as clearly or as plausibly as you do. I’m curious whether you were influenced by Objectivism on this, or whether you arrived at the view independently.

      This, by the way, is Rand’s definition of “knowledge,” from Introduction to Objectivist Epistemology, p. 35:

      “Knowledge” is . . . a mental grasp of a fact(s) of reality, reached either by perceptual observation or by a process of reason based on perceptual observation.

      http://aynrandlexicon.com/lexicon/knowledge.html


      • Something doesn’t sit right about it with me, either, and I was going to post a follow-up comment this morning anyway—not that what’s bothering us needs to be the same.

        What bothers me is that, after all, we routinely use “believe” perfectly sensibly, which shouldn’t be true if belief is only failed knowledge. This implies that belief does have some sort of positive role that should be acknowledged.

        When I think of how I and others use “believe,” I think of examples like the following.
        • “Leave it, now, Lizzy. I believe all will turn out well.”
        • “I believe that anthropogenic global warming is a fact.”
        • “He has failed every diet plan he has ever tried, and I believe he always will.”
        • “I believe a Trump presidency would be a disaster for the nation.”

        In all these cases, the speaker expresses confidence that something is true while acknowledging that he doesn’t know. To say “know” in any of these cases would be overreaching. So “believe” here is like a certain use of “think,” which could be substituted in any of these without changing the meaning substantially. (If there’s a difference, “think” seems a bit weaker; “believe” expresses greater conviction. “Believe” implies a willingness to follow through with action that “think” doesn’t necessarily do. But this is a matter of degree.) The meaning is that one feels convinced that something is true even though the case for its truth is not conclusive.

        It makes sense that there should be such a concept, since we often have to try to decide what is true in cases where knowledge is unattainable. And this concept of believing might encourage the idea that knowledge is “things going right for belief,” since belief results from trying to find the truth. But this idea is still a mistake. Knowledge and this sense of belief are functionally quite different. For instance, if the speaker turns out to be right in any of the above examples, that wouldn’t mean he knew after all. (Notwithstanding that we sometimes talk this way, as when somebody says, “I knew it!” But I think this use of “know” is an exaggeration; it isn’t literally true.) The speaker’s epistemic situation in these cases is insufficient for knowledge; that’s just why he says “believe” instead of “know” in the first place. This fact doesn’t get changed by a later verification of the fact in question.

        So it seems there is an independent functional role for belief, after all. But belief in this sense is still not a component of knowledge, and knowledge is not successful believing.

        There may be borderline cases between believing in the legitimate sense and knowing, but I think the difference is still pretty strong. It’s easy to name things with which we have very secure cognitive contact, and therefore know. I know I am in Oakland, I know I am awake, I know I have two hands, I know that Obama is president, I know that George Washington was the first president, I know that I own a Ford (whatever Jones may own), I know that I live on a fairly steep hillside, I know the earth is round, I know my mother loves me. It would actually be odd to say I “believe” any of these things. It would suggest that I have discovered some special reason for doubt, after all. Whereas none of the belief examples above is really very secure. However great the evidence for them may be, none of them has an epistemic probability of being true greater than, say, .999 at the most, which isn’t really very great, if you think about it. Thousand-to-one shots happen all the time. The epistemic probabilities of the examples of knowing I just gave are all much higher than that, by many orders of magnitude I would say. Notice, for instance, that none of them could turn out to be false without also overturning an enormous chunk of my worldview. So, if one of them did turn out to be false, we might correctly say that I believed it to be true, and thought I knew it to be true, but that I didn’t really know it after all. This is the sort of case where “belief” steps in to fill the conceptual gap when knowledge fails. But this isn’t the same sort of belief as in the previous cases. This is belief as failed knowledge.

        I definitely inherited a set of strong prejudices in favor of various realisms from Ayn Rand, but that was too far in the past to still have much if any effect. I arrived at my present view about knowledge, I think, because Williamson’s writings made so much sense once I had been convinced of the truth of representational direct realism in sense-perception, mainly by reading Mark Johnston, Fred Dretske, and A. D. Smith, and then writing a dissertation on the topic. Prior to this, I went through a lot of fluctuations, I’m afraid! I personally find it odd that so few philosophers ever change their minds about anything significant. I’ve had to change mine about many things.


        • I too find it strange that so few philosophers ever change their minds about anything significant. I find it even stranger that the few who do are routinely abused for it. I can’t remember how many times I’ve heard someone make fun of Hilary Putnam for changing his mind so often. It’s as though changing your mind once in a really big way like Kant or Wittgenstein is ok, but anything more than once is no good, and if your change of mind doesn’t produce a whole movement in philosophy, you’re just a loser.

          I suspect the institutional structure of academic philosophy has something to do with this, though no doubt there are other factors. Probably there are ordinary human motives and specifically philosophical motives for avoiding at least the appearance of changing one’s mind; but once changing one’s mind becomes a professional liability, it’s little surprise that it becomes much rarer.

          Meanwhile, I would just like to make up my mind for once.


            I find it puzzling that philosophers talk like they are dead-certain that P – until they change their mind, at which point they talk like they’re dead-certain that not-P. In tone, this is the affectation of professorial authority or something similar. I catch myself taking this tone embarrassingly often. I generally prefer more tentative, jury-is-still-out language and tone.


          • I don’t find the phenomenon in question strange at all. The answer, I think, has to do with how philosophers conceive of the task of inquiry. In other words, what is their paradigm inquirer? Is it a relatively theoretical person or is it a relatively practical person?

            There are certain practical endeavors that require complex inquiries, and require that you get the inquiry right the first time. There is little margin for error, and there are high costs for getting things wrong. And yet the inquiries in question are undertaken in high-pressure contexts that are highly conducive to error.

            Think of criminal justice, medicine, mental health, or safety inspection work. In each of these lines of work, you face a practical problem that requires a relatively quick solution, where the solution itself depends on a highly complex quasi-scientific inquiry. There is pressure to get the correct answer, but circumstances are such as to incentivize error. Is the defendant guilty? Is the tumor malignant? What DSM-5 diagnosis should the patient get, and what medication if any should she be on? Is the aircraft flight-worthy or should it be grounded? Etc. To do the job competently, you must produce a determinate answer to the question you face–within relatively severe time constraints, and under relatively arduous conditions. Nonetheless, if you hedge your answer (or hedge it too much), you’ll be considered derelict in your duties or even legally liable for some sort of malfeasance. Hedging is not a luxury you can afford. You can’t (decently) say, “Well it seems very plausible to me that the defendant is guilty, but I could be wrong, and if I’m wrong, I’ll revisit the issue in a few years and reverse my judgment. No big deal.” And so on, mutatis mutandis, for the other examples I gave (and similar sorts of examples).

            If you think of philosophy by analogy with jobs like the preceding, you’ll be loath to change your mind about anything. The imperative will be: conduct the inquiry as though someone’s life depended on the outcome. You will tend to be very careful (and legalistic) about the positions you take, carefully avow some position that seems absolutely clear, and then dig in your heels. The alternative (on this view) is to treat philosophy as a sort of frivolous game. But if it is not a game–if it is a deadly serious business, by analogy with law or medicine–then it should be practiced accordingly. So the pattern will be: arrive deliberately at a settled view, be sure that it’s right, bank your reputation on it, and treat any significant changes of mind as a kind of personal defeat.

            Philosophers who are laid-back about changing their minds have a different conception of philosophy. For them, philosophy is a lot more like a game or a performance than it is like law or medicine. If you screw up a game or performance, the ethos of athletics or music says: “Don’t cry about it, just do better next time. It’s not like you’ve killed anyone.” So it is with the philosopher who avows that p and then turns around and avows ~p a few years later. Her attitude is liable to be, “Once upon a time I believed p. Now I don’t. So what? I changed my mind. I have the right to do that, don’t I?” (Cf. Jennifer Lopez: “I used to have a little. Now I have a lot. I’m just, I’m just, Jenny from the block.”)

            The problem with the first view is that it can lead to repression and dogmatism. But the problem with the second view is that it can lead to a repugnant sort of frivolity, where inquiry becomes indistinguishable from “play.” It’s not easy to find the mean between these two models. But I see the attractions of both–and the problems with them, too.


          • The trouble I see is that many philosophical problems are not ones that inherently demand that we come to some firm, settled conclusion, and yet these are very often precisely the problems on which professional philosophers do not change their minds. It is only worse when those very same philosophers seem to regard philosophy as something like a frivolous game, just one in which the rules are such that the player who changes his mind the least wins. I agree that both of the approaches you describe are problematic and that it’s hard to find the mean between them (my younger self veered much too far in the former direction; these days I probably lean too far in the latter). But it doesn’t seem too difficult in principle to see how one can take inquiry seriously without becoming rigidly dogmatic (which I am inclined to describe as a way of not taking inquiry seriously) and how one can be duly cautious and modest (and, I would say, intellectually honest) without indulging in frivolity. Of course, getting it right in practice is harder than just envisioning it in the abstract. But external pressures, whether of professional success or social reputation or whatever, seem to make it harder even to aim at the right goal, since they can incentivize defending a view against all challenges and disincentivize caring about truth as such. If philosophy were done primarily by people who do not care about reputation or need to worry about their publication records, I suspect it would look rather different. I don’t mean to suggest that it’s entirely a product of contemporary academic institutional practices and structures — there have been plenty of other practices and structures in the history of philosophy that have led to broadly similar results, and the attractions of dogmatism are strong for most people for a variety of reasons — but it does seem to me that philosophical rigidity is not often driven primarily by considerations internal to philosophical inquiry.

          • It isn’t difficult in principle to see how to avoid rigid dogmatism and frivolous self-indulgence, but it’s more difficult in practice than most people realize. Yes, there are external pressures that deform inquiry, but it seems to me that the internal pressures are more philosophically interesting, because they’d remain even if you solved the problems posed by external pressure.

            The rigidity that arises from conceiving philosophy on the model of a practical discipline is very difficult to avoid in the parts of philosophy that are supposed to be practical–applied ethics, political philosophy, philosophy of law. These sub-disciplines are supposed to guide practice. But in the nature of the case, a guide to practice has to be relatively self-consistent and stable. If philosophers change their views too frequently, they lose their credibility as guides to practice. The price of that loss of credibility is loss of the idea that philosophy can in fact guide practice. But if it can’t, then it’s a question what these sub-disciplines of philosophy think they’re doing.

            A detective couldn’t be taken seriously if she regularly decided that X should be arrested for rape, then regularly decided, six weeks into the prosecution that, well, on reconsideration maybe X is 100% innocent. Or imagine a pathologist who regularly got biopsies wrong, but had the “intellectual courage” to admit it–except that “getting it wrong” happened with every other biopsy. “You know that last biopsy, the one I said was malignant? Well [chuckle], I looked at it again–and it’s benign! Sorry about that pointless surgery…I know, I know, this keeps happening…” Then there’s this conversation (and its kin):

            Train Conductor: Is the train safe enough to depart the station?
            Inspector: Yeah.
            [One hour later]
            Inspector: I changed my mind. It wasn’t safe enough to depart. Actually, I kind of think it’s going to crash. I realize this is a 180-degree reversal of my original view.
            Train Conductor: But we’re an hour into the route, you asshole! You keep doing this! You did it last week, and the week before!
            Inspector: What, I don’t have the right to change my mind? If applied ethicists can change their minds, why can’t I? I don’t even get paid as much as an applied ethicist.

            If determinacy and counterfactual/temporal stability are necessary conditions of practical knowledge, but individual philosophers can’t manage to achieve them, we seem to be led to the conclusion that philosophy can’t produce practical knowledge (knowledge that guides practice). But if it can’t, why insist on philosophical sub-disciplines that purport to guide practice (as we do)? Maybe the guidance that applied ethicists (etc.) give is less direct than the sort of guidance a detective, a pathologist, or a train inspector is required to give on the job. But it has to have some practical relevance, and to the extent that it does, it’s going to have to approximate the sort of determinacy and stability that practical inquirers regularly produce. If it can’t, it has to admit that, and beat a retreat from practical life–something philosophers are loath to do.

            Consider a different kind of case. Suppose that I’m a political philosopher, and I produce a treatise arguing that X is the ideal political system. If this treatise is supposed to guide political action (and what else is it supposed to do?), its claims have to be good for at least as long as it takes to realize X in the real world. The presumption is: if you’re arguing that X ought to be brought about, your advice should hold good for as long as it takes to bring X about. Well, it takes a long time to realize any political ideal in the real world. But if the treatise is written on the presumption that the author may well come to the conclusion, within five years, that in fact ~X is the ideal political system, how credible could the treatise be as a guide to political practice? Take a non-academic reader with philosophical aptitude and political ambitions who’s trying to decide whether or not to read the book. If he knows that every significant claim in the book might well be disowned by the author in five years, what’s the point of reading it? If the author himself admits that, hasn’t he (the author) made a confession of failure?

            Example: think of Nozick’s views on libertarianism. In 1974, he’s a libertarian (Anarchy, State, and Utopia). By 1981, he seems not to be (Philosophical Explanations). By 1989, he says he’s not (The Examined Life). By 2001, we have no idea where he stands (his final interview). You could call that “a refreshing propensity to change one’s mind.” But you could also call it epistemic failure–coupled with a confusing inability to be clear about matters of life and death (Nozick’s libertarian views entail a radical privatization or retrenchment of the welfare/regulatory state, which could be a matter of life and death for some people).

            In practical philosophy, I think it’s advisable to avoid Nozick’s predicament. But in practice, it’s hard to avoid that predicament without either becoming or seeming to become dogmatic.

          • Of relevance: this article has been making the rounds. I certainly agree with them as far as they go, but it’s pretty depressing to learn that all that philosophers can add to policy debates is a sense of the right questions to ask, that they have nothing to offer in the way of answers, and that (as things stand) they aren’t even up to the task of asking the right questions. “If trans-science is our new ideal,” the authors say, “then Socrates is back in business.” True enough, but the epistemic equivalent of burka-level modesty.

          • David R. & Irfan: what I had in mind was more the internally-prompted motivations to be authoritative, certain, etc. I suspect that being authoritative is just a dominant model for what it looks like to be smart. Some people have a different model – for example, the model of being the guy who makes good points, objections, etc. (I find this model, not the authority-mongering-plus-mind-changing model, to be more typical of the oh-how-smart-I-am-motivated intellectual “treating it all like a game”). In any case, if you are really committed to feeling good about yourself by showing the world and yourself how smart you are, and you accept the being-authoritative model of what it is to be smart, you will be something of an authority-monger (whether or not you change your mind much). Lots of people who are terrible at reasoning but good at having intuitions/opinions operate with the authority model of smarts (or of simply seeing what is true and right, smartness be damned; think cult and religious leaders). But I appreciate that there can be local or global cultural and institutional factors that encourage such motivations, models, etc. (hence, I sometimes reference the cultural/institutional phenomenon of there being a “clever derby” going on in some seminar, colloquium, context of discussion, etc.).

            Irfan: you are certainly onto something regarding what I would call directly-decision-making-relevant inquiry. All practical contexts require decision-making, and it can be unnerving to make decisions from a stance of uncertainty. Ultimately, if we are honest, I think we have to live with the uncertainty most of the time – though deciding is definitive (and often desirable or even necessary), our basis for deciding often is not. However, defending a principle that might be used in making a decision (say, a principle that says that anyone all-in-ought to do this or that in a circumstance of a certain type) is not the same as making a decision or drawing conclusions that directly rationalize decisions. And this is important. Practical philosophy, no matter how specific, is not much in the business of making decisions or defending propositions that directly rationalize the resolution of any specific decision-situation. So I wonder about the import of your point about practical or decision-making necessity regarding how deeply philosophers should dig in their heels when they defend their various conclusions (whether the conclusion is an all-in-ought-relative-to-a-context proposition, some proposition about practical reasons, or some proposition that expresses decision-relevant descriptive information). Sure, all philosophical propositions have some (possible) practical import – they have some effect on how one should decide some (possible) matter or other. But wouldn’t it require a clear relationship to really important actually-likely decision-making situations for the practical necessity of making decisions (and making the right one) to “transfer” the urgency/importance to coming to definite conclusions (and, ideally, having relevant specific-enough knowledge)?

            (The most plausibly true proposition I can think of in the vicinity is the idea that the necessity of decision-making (and perhaps other factors) drives the need to mostly treat belief as an all-or-nothing affair (with “I have no idea whether P or ~P” as a last resort) rather than as a matter of degree (credence, degree of credence). But I don’t think that this is exactly your point. There is also a pretty substantial literature on how and why practical stakes can affect what constitutes definitive evidence for or against a proposition.)

            Some elements of my positive view: from the standpoint of a truth-seeking theoretician, I find the following to justify a pretty skeptical, open style of argumentation and presentation: (a) the in some sense globally more-accurate information content of sets of credences as against analogous sets of all-or-nothing beliefs and (b) something like inference to the best explanation as the most important – but not simple or cut-and-dried – epistemic standard in philosophical reasoning (and most reasoning).

            (Another, perhaps somewhat tangential point: I find that even when people appeal to the practical – particularly moral – importance of getting it right about whether some proposition is true or false, this appeal is often specious because what is really at stake for them is a personal investment in a certain way of holding others to account – not the pattern of attitude/behavior in question either (i) being some fundamental rupture in the moral relation of trust and respect to another or (ii) making some social disaster significantly more likely (if enough others copy it and if it is likely enough that they will). Very often, there is some element of biased motivation and mere rationalization in the operation of “the righteous mind.” The personal/parochial/power-struggle-winning motivations, though often somewhat opaque to the agent, often seem to be what drives exaggerated reasons of extreme moral viciousness and near-certain social disaster being associated with the target attitude/behavior pattern. In this way, I find highly moralized stances to be both harmful and often to instantiate some degree of bad will and irrationality. I often wonder whether appeals to the practical or moral importance of concluding/deciding – especially regarding moral matters – instantiate this pattern that I have just argued is meaningfully teleologically and procedurally flawed.)

          • Just speaking as someone who cares about philosophy, I want to get it right. Therefore I want to be the sort of person who will revise his beliefs if powerful arguments require it. If this means that I can’t write a book like Anarchy, State, and Utopia, so be it; I would never have written a book like that anyway. But let’s be fair to Nozick here: he didn’t write a book proclaiming the truth, he wrote a book that presented a series of arguments for the consideration of intelligent readers. Perhaps I underestimate the dogmatism of ASU, but I don’t think so. In any case, if I were to write a book like ASU, I’d do it in the spirit of offering what struck me as the best arguments on the topic in the hopes that intelligent readers would offer their responses to it, ideally leading me either to reaffirm its thesis if it struck me as surviving objections or to reject it if it struck me as falling to those objections. I’d like to think, anyway, that if I were at all embarrassed to change my mind in the face of criticism, it’d only be because I was immoderately attached to my professional reputation. But I, at least, actually admire Nozick more for changing his mind than I would have if he were like the very many libertarians who, having once met an argument that convinced them, closed off their minds to all further inquiry.

            Yay Nozick. If he couldn’t sustain it, it isn’t sustainable. That’s a philosophical lesson, isn’t it?

          • Nozick’s view on what he was doing in ASU is instructively incoherent. Here he is in tentative exploratory non-authoritative mode at the beginning of the book:

            My emphasis upon the conclusions which diverge from what most readers believe may mislead one into thinking this book is some sort of political tract. It is not; it is a philosophical exploration of issues, many fascinating in their own right, which arise and interconnect when we consider individual rights and the state. The word ‘exploration’ is appropriately chosen…[followed by three sentences of hedging]. There is room for words on subjects other than last words.

            Indeed, the usual manner of presenting philosophical work puzzles me. Works of philosophy are written as though their authors believe them to be the absolutely final word on their subject. … (p. xii).

            Here is Nozick at the end of the book:

            The minimal state treats us as inviolate individuals, who may not be used in certain ways by others as means or tools or instruments or resources; it treats us as persons having individual rights with the dignity this constitutes. Treating us with respect by respecting our rights, it allows us, individually or with whom we choose, to choose our life and to realize our ends and our conception of ourselves, insofar as we can, aided by the voluntary cooperation of other individuals possessing the same dignity. How dare any state or group of individuals do more. Or less. (pp. 333-34).

            It’s a picture-perfect example of a failure to resolve the tension I noted–and failing that, of trying to have things both ways at once.

            The first passage gives us the impression that Nozick is the serenely aloof theoretical philosopher, engaged in “explanatory political theory,” and searching tentatively for a reduction of the political to the non-political (p. 6). If he gets that wrong, what’s the big deal?

            The second passage suggests that both anarchy and a more-than-minimal state are unjust, and involve an unjust presumption in the use of coercive power. The clear implication is: don’t be like that. It’s a paradigm of practical advice, and it can’t escape the principle I came up with: if you’re giving political advice, your advice should hold good for the duration required to realize the advice in the real world. If not, you can’t be regarded as a credible advice-giver.

            Whatever Nozick’s initial hedges in the Preface, Anarchy, State, and Utopia was obviously intended as offering real-world advice, and was correctly taken that way by libertarians and conservatives, who regarded it as the blueprint of their efforts on behalf of “a free society” modeled on his prescriptions. Nozick’s tendency to zig-zag between tentativeness and dogmatism doesn’t change the fact that the book was intended to prescribe. It just highlights Nozick’s failure to get clear on what he was doing. And he’s not alone. There’s a lot of unclarity about what philosophers think they’re doing when they prescribe to the world. One way to escape that problem is to disclaim any pretensions to authority and stop prescribing, but the price of that escape is our (self-professed) irrelevance to the world beyond philosophy. It’s a high price to pay.

          • Otherwise put, why should my goal in doing philosophy ever be to present myself as an authority? Why shouldn’t it be to present the best arguments as I see them? Why shouldn’t I sincerely be happy, as Socrates said he would sincerely be happy, to be refuted, if I am actually refuted?

            Again, I don’t doubt that there are factors other than academic professionalization and ignoble concern for reputation that lead most of us, at one time or another, to take a different attitude. But if we were truly free people, free from the external necessities of professional advancement and the internal necessities of honor, praise, and reputation, would it not be otherwise?

          • My point isn’t so much that your goal in doing philosophy should be to present yourself as an advice-giving authority. It’s that if you want to do that, and be regarded as credible, you can’t change your mind in the way that Nozick or Hilary Putnam did. You cannot, a la Putnam, be a Maoist one day, then get embarrassed by your Maoism, hope that the stories of your more bizarre exploits get buried in obscurity, and then wonder why people make fun of your Maoist stage. Nor can you, like Nozick, pound the table for the minimal state, then change your tune when you write your next book. You have to be more like Rawls or Dworkin: one way or another, you have to defend some version of the same message for the duration of your career.

            Of course, you may not want to present yourself as an advice-giving authority figure. And I’m not saying you should (or anyone should). I’m just noting the correct inference that would follow if everyone in the profession took that position: the profession would become prescriptively irrelevant to the world beyond itself.

            Socrates solved this problem by serving merely as a dialectical critic of existing belief and practice–he disclaimed prescriptive knowledge (mostly), but claimed critical authority (while asserting a commitment both to Athens and to Apollo). That is one way of solving the problem, but it’s both too modest and too ambitious for most philosophers today. Too modest: it’s merely critical or dialectical. Too ambitious: it presupposes a much better knowledge of the practical world than most philosophers think they need to have. Most academic philosophers are not particularly interested in technai, and are not in a position to examine expert practitioners of techne in the way Socrates was. They simply don’t conceive philosophy along Socratic lines anymore. Maybe that’s explainable by external necessities, but maybe it’s just a change of intellectual orientation. I don’t know. My point is that the choice between the two models of philosophy I mentioned is fundamental, and hard to make. There are good arguments on either side of the choice, including the practical one.

          • I somehow missed your response earlier, so I’m late to the game. I don’t have detailed views on Nozick, but I think your characterization of his attitude as incoherent is off the mark. It’s as if you were offering him (and, by extension, any philosopher who writes on related topics) two options: either commit yourself to the theory to the point of dogmatism or don’t write on topics that have practical implications without couching it all in the language of “perhaps, maybe, could be.” I see no tension, let alone incoherence, between the two passages of Nozick you quote. I take him to have been confident in the arguments and conclusions he offered, and to be attempting to present them in a provocative way. But that is consistent with ASU not being a political tract intended to persuade people toward concrete political action. The point, I take it, is not that the philosophical view does not have any implications for political action. It is, rather, that the book sets out to argue for a particular view in political philosophy, not to advocate concrete policy, and that it does so in the spirit of offering arguments for further debate, not as an attempt to silence the opposition or outmaneuver it in gaining the reins of political power.

            But the particular case of Nozick isn’t really what I’m concerned with, and I don’t know his work well enough to have a very solid view on it. Similarly, I think Putnam’s political advocacy falls outside the scope of what I have in mind; for one thing, I don’t regard him as ever having had anything significant to say about politics, and certainly his flirtation with Maoism does not inspire confidence. I have in mind things like his more or less inventing functionalism and then rejecting it, his movement away from metaphysical realism and back to it, etc. I have some definite sympathies and antipathies in those areas, but Putnam’s arguments at each stage were typically good and interesting – in the way that flawed arguments can be interesting precisely because they reveal a flaw – and it was better that he made the new arguments rather than making a bunch of chess moves to defend his old views because he didn’t want to lose face. Putnam was quite right, I think, to make strong and confident arguments at each stage despite repeatedly changing his mind, because the arguments he offered at each stage were the best he could see, had not been refuted so far as he could see, and were put forth in the spirit of having them subjected to criticism. I suppose one might think that the particular arguments Putnam offered were all pretty bad at most stages; I would disagree, but the point isn’t so much about Putnam as it is about the possibility of doing philosophy in a way that is neither irrationally committed to defending a view with the fewest significant changes nor lacking in any kind of confidence or nauseatingly wrapped up in layer upon layer of “well, maybe, possibly, let’s see.”

            As I see it, anyone who offers a philosophical argument worth offering must either be genuinely open to the possibility that he is mistaken or be intellectually dishonest with himself. But genuinely acknowledging one’s own fallibility is consistent with being sincerely convinced that some view must be right because you cannot see any reasonable grounds to think otherwise. Perhaps Putnam and Nozick really weren’t like that; perhaps they were in fact dogmatists who just kept changing their dogma. But I don’t see any reason to suppose that it is impossible to be sincerely convinced and genuinely open to one’s own fallibility, or even that it’s an extremely rare attitude.

            (I’m trying to display that attitude right now, insofar as I both think I must be right about this and am trying to acknowledge my fallibility by discussing it in a style of argument designed to allow you or anyone else to show me why I’m wrong if I am).

            Coincidentally, I can’t help but express my surprise that you would cite Rawls as a contrast, given his famous change of mind and embrace of ‘political liberalism.’ Perhaps I have less sympathy for Rawls than for Nozick or Putnam because I think Rawlsian political liberalism is a huge step in the wrong direction from his earlier work, whereas I think the other two improved. But perhaps your point is that the later Rawls does not repudiate much if any of the practical consequences of ToJ Rawls. Fair enough.

          • Michael,

            I guess I would first contest this characterization of applied ethics (and other applied disciplines in philosophy):

            Practical philosophy, no matter how specific, is not much in the business of making decisions or defending propositions that directly rationalize the resolution of any specific decision-situation. So I wonder about the import of your point about practical or decision-making necessity regarding how deeply philosophers should dig in their heels when they defend their various conclusions (whether the conclusion is an all-in-ought-relative-to-a-context proposition, some proposition about practical reasons, or some proposition that expresses decision-relevant descriptive information).

            I don’t think the first sentence is true. Just look at some standard bibliographies or reference works in applied ethics (bioethics, business ethics, etc.). They’re very much in the business of prescribing for first-order decisions. Here’s Oxford Bibliographies. Here are the search results for “applied ethics” at the Routledge Encyclopedia. Think of “The Philosopher’s Brief” of yesteryear, or the BHL brief of a few weeks ago, or Peter Singer’s devoting a whole book to the case against voting for George Bush in the 2004 election. Philosophers like Nagel, Nussbaum, Dworkin, and Peter Singer became famous as public philosophers for arguing for very specific policies. Even when a philosopher defends what seems a more abstract claim, it has some very particular consequences. If Macalester Bell is right that it’s sometimes appropriate to feel contempt for people, then sometimes we ought to feel contempt for people. That’s not a practically trivial thesis (it could require a re-orientation of one’s whole psychology), and she gets pretty detailed about when it’s appropriate and when not. I think examples could be multiplied ad nauseam.

            The point I was making was not so much about uncertainty as about counterfactual and temporal stability (as well as determinacy): if you are going to offer practical advice, the advice cannot constantly change. This is not necessarily advice about what you should do if you happen to change your mind. I am not saying: if you have recommended that p, but become persuaded that ~p, dig in your heels and pretend to assert that p anyway. I am saying: Let p be a philosophically significant prescription. If you are the kind of person who regularly finds himself in the predicament of oscillating from the belief that p to the belief that ~p, feel free to change your mind when and as that happens, but the fact remains: you’re not a credible advice-giver. Insofar as philosophers want to be credible advice-givers, they have to do better than that. Maybe they can’t do better, in which case they certainly shouldn’t bluff, but like it or not, proficiency-in-prescription-giving entails counterfactual and temporal stability in the prescriptions one offers, plus determinacy. Either we can deliver or we can’t, and if we can’t, our relevance to the world beyond academia is very much open to question.

            Stability can be made compatible with uncertainty. You might be uncertain in the same way about the same issue for a long time. You might even think that uncertainty about that issue is the best we can do. Uncertainty can also (in a certain sense) be made compatible with determinacy. You can avow a proposition that makes clear what you claim to know and what you cannot claim to know. The relevant point, though, is that if you’re going to give practical advice, you have to say something action-guiding despite your uncertainty. (Medical and quasi-medical diagnoses and therapies are a paradigm of this.) If you face options A, B, and C under conditions of limited information (and limited capacity to process the information), it is one thing to claim uncertainty about the ordinal ranking of A relative to B and C, but if uncertainty becomes a ready excuse for insisting that the jury is perpetually out, you’re not in a position to offer advice. Maybe philosophy is not in such a position (in which case we should be honest about it), but if it’s not, the price is irrelevance (in which case we should be honest about that).

            So my answer to this question….

            But wouldn’t it require a clear relationship to really important actually-likely decision-making situations for the practical necessity of making decisions (and making the right one) to “transfer” the urgency/importance to coming to definite conclusions (and, ideally, having relevant specific-enough knowledge)?

            …is that sometimes the transfer condition is clearly or directly satisfied. And where it isn’t clearly or directly satisfied, it can be satisfied in some indirect or non-obvious way (that requires spelling out). The relationship of Nozick’s libertarianism to American politics is not obvious or direct, but it’s not negligible, either. Trivially, if the minimal state is the only just political regime (which is what Nozick believes), then if we value justice, it seems to follow that we ought to take those measures (whatever they are) that tend to promote the minimal state. Maybe doing so is very complicated. (Nozick thought it was very complicated.) But it can’t be wished away, either. Evaluative and prescriptive propositions aren’t motivationally inert truths that can be avowed and then ignored. They have to be acted on. The only way to escape this feature of practical philosophy would be to abolish practical philosophy and retreat to non-prescriptive theorizing.

            On the elements of your positive view:

            (a) the in some sense globally more-accurate information content of sets of credences as against analogous sets of all-or-nothing beliefs and (b) something like inference to the best explanation as the most important – but not simple or cut-and-dried – epistemic standard in philosophical reasoning (and most reasoning).

            A lot turns on how we read (a). Practical decisions require a choice between at least two options. If S faces A or B, then at some level one option has to be more choiceworthy than the other. Is that an all-or-nothing belief? It depends on what you mean by that. My point was really that in practical philosophy, you can’t avoid claims about choiceworthiness, and it’s highly desirable that your claims be stable over time.

            I wouldn’t disagree with your last paragraph. I’d just say that it has its equal and opposite counterpart on the other side of the ledger: sometimes people–philosophers in particular–theorize in order to avoid engagement with the world. They spin out theories because theorizing feels safer than having to make decisions with real consequences, and they isolate themselves from practical commitments in order to generate an illusion of maximal autonomy. They sometimes comfort themselves with the claim that they’re disinterestedly seeking truth when what they’re doing is seeking an escape from reality. The political economy of higher education often rewards that set of rationalizations, and because it does, people’s livelihoods come to depend on a belief in it. That’s just as problematic as “the righteous mind.” The problem just takes a different form.

        • To David P:

          That gets at part of my problem, but not all of it. I guess I would contest (and query) the rationale for the analogy between belief and appearance. You say:

          I suggest we look at the relation of knowledge and belief analogously to the way I have outlined the relation of perception and appearance.

          Why? And I mean why on both counts. In other words, why think of all knowledge by analogy with perceptual knowledge, and why think of belief by analogy with appearance?

          On knowledge and perception: it’s true that knowledge qua knowledge tracks facts in a world that exists independently of our cognitive activities, but arguably, perceptual knowledge is relatively automatic and higher-order knowledge is not. It involves something like voluntary assent to propositions. “Belief” has traditionally been conceived as assent to a proposition. That gives the concept a rationale, and also explains why it has no perceptual-level analogue. There is no analogue, at the perceptual level, of assent to a proposition.

          On belief and appearance: I would put the disanalogy here more strongly than you did (in your second comment). You give examples in which “the speaker expresses confidence that something is true while acknowledging that he doesn’t know.” But it seems to me that a speaker can use the term “belief” to express confidence in the truth of a proposition while insisting that he does know.

          Imagine a courtroom or legal deposition in which the witness is asked whether he believed he was doing or not doing X at some time t–where doing or not doing it (at that time) and believing that he was doing it (at that time) were material to some important aspect of the case. If the question is “Did you believe you were doing X?” the witness could intelligibly answer “I did believe I was doing it.” His point is not merely that he was doing it, but that he self-consciously believed at the time of the occurrence of X that he was doing it. And if that’s true, he’d have knowledge.

          In other words, I don’t think the word “believe” has the sort of hedging function that “appearance” has. When a person says “I believe that p,” I don’t think that there’s any implication that because he merely believes that p, his belief is somehow analogous to a perceptual appearance. He could believe p with full conviction and with full epistemic warrant. In that case, it seems to me, the analogy with appearance doesn’t hold.


          • Irfan,

            I’m going to do what I usually abhor, which is reply point-by-point to your comment. The practice is abhorrent because it is usually just adversarial and fails to address whatever larger points are at issue. But in this case, you make a series of interesting points, and I really think this is the most efficient and productive way of responding.

            Why? … In other words, why think of all knowledge by analogy with perceptual knowledge, and why think of belief by analogy with appearance?

            There is no reason a priori why knowing and perceiving must be parallel. They are both forms of awareness, which is suggestive, but I’m not saying it has to be so. It’s just a suggestion I think turns out to be productive.

            On knowledge and perception: it’s true that knowledge qua knowledge tracks facts in a world that exists independently of our cognitive activities, but arguably, perceptual knowledge is relatively automatic and higher-order knowledge is not. Higher-order knowledge involves something like voluntary assent to propositions. “Belief” has traditionally been conceived as assent to a proposition. That gives the concept a rationale, and also explains why it has no perceptual-level analogue. There is no analogue, at the perceptual level, of assent to a proposition.

            Indeed, belief has traditionally been conceived—by philosophers—as assent to a proposition, but in my view wrongly so. You cannot believe something simply by assenting to it, as William James pointed out a long time ago (“The Will to Believe”), and as Bernard Williams pointed out more recently (“Deciding to Believe”). James cites it as just an empirical fact. Williams gives an argument: suppose you could believe something simply by deciding to; then you would know that a mere decision was the source of your belief rather than a connection to the truth; but this knowledge would necessarily undercut and destroy your belief-by-decision. Belief cannot withstand knowledge of contrary evidence or reasons, nor can belief be prevented in the face of known evidence or reasons. Belief is not voluntary at all. What is voluntary is the line of thought we pursue: what we choose to examine, which arguments we scrutinize for logical flaws (because their conclusions are disagreeable), which arguments we don’t question (because their conclusions are welcome), etc. But the process of knowing itself, of responding cognitively to evidence and argument, is relatively automatic, as is the process of perceptual judgment. So I think the parallel between perception and knowledge holds up.

            Imagine a courtroom or legal deposition in which the witness is asked whether he believed he was doing or not doing X at some time t–where doing or not doing it (at that time) and believing that he was doing it (at that time) were material to some important aspect of the case. If the question is “Did you believe you were doing X?” the witness could intelligibly answer “I did believe I was doing it.” His point is not merely that he was doing it, but that he self-consciously believed at the time of the occurrence of X that he was doing it. And if that’s true, he’d have knowledge.

            Actually, I think the example is odd. Unless there is some reason to doubt whether the witness was doing X, it would be strange to ask whether he believed it rather than knew it. Suppose the witness gave somebody poison unwittingly, thinking it was just water. Then one might ask,

            “Did you realize you were poisoning so-and-so?”

            “Not at all! I thought I was giving him water.”

            “So you believed you were giving him water?”

            “Right.”

            “What made you think that?”

            “I poured it from an Evian bottle.”

            “So you believed it was an Evian bottle?”

            “Why? Wasn’t it?”

            “It was, but philosophers say you can use ‘believe’ just as well as ‘know’ in this context.”

            “Philosophers are idiots.”

            (Present company excepted, of course!)

            One last note. I am saying there are at least two senses of “believe,” the hedging sense and the sense of failed belief. “Believed” in the third line is used in the failed belief sense, not the hedging sense.


    • Thanks for those comments, David P. Apologies that it has taken me awhile to reply.

      When something is functionally constituted, what constitutes it need not itself be functionally constituted. So appearance (or perception-type representation) does not have to be constituted by any functional role. It might be something, a type of internal neural state of an organism, that comes to be suited for accurately representing certain aspects of the organism’s typical surroundings or environment. These considerations suggest that perception is constituted by successful perceptual representation (representation that is true and true due to standard processes occurring in the conditions for which they are adapted – so that the truth is not a matter of accident). Similar considerations apply to belief and knowledge perhaps – though I suspect that belief is itself functionally constituted relative to something like neutral conceptual representation (conceptual representations that need not be asserted). But I don’t think that the point about belief here makes a difference in evaluating the account implicit in the Carter et al essay (the point is just that, however belief is constituted, you get knowledge via the functional success of believing).

      Also: failed (putative) knowledge would be explained by something other than knowledge – presumably belief – failing to meet its success-conditions (not vice versa). And this seems to imply that knowledge is constituted precisely by belief meeting its success-conditions.

      However, metaphysics aside – speaking only about the possible orders of understanding – I think we can perfectly well understand belief (fix reference on the right thing via properties that it actually has) by characterizing it as failed knowledge.

      Related metaphysical claim: as knowledge is constituted by beliefs meeting their success-norms, so belief is constituted by having (or being subject to) the relevant norms of truth and rationality or truth-via-rationality – whatever, metaphysically, functional norms (or being subject to them) come to. In this way, relative to a set of functional norms, tokens instantiate merely material types in instantiating having-functional-norm types. The main point of my original commentary was that, in the case of the relevant epistemic types, the meeting-relevant-functional-norm type [knowledge] is distinct from the having-functional-norm type [belief] – and that this seems like the minimal, and perhaps most essential, claim of the broadly knowledge-first program. But maybe this is not how most knowledge-firsters think of their program.

      Here’s what my minimalist knowledge-first-ish claim supports: for any purpose other than the fine-grained constitutive-explanatory story that foundational epistemologists and metaphysics-of-knowledge philosophers are interested in, we do well to start with what is (or is typically) known or not, and with what an organism is (or is typically) perceptually aware of or not. It certainly seems right that evolutionary pressure would not favor mechanisms of belief-generation apart from those mechanisms producing knowledge (or mechanisms of perceptual-appearance-generation apart from those producing perceptual awareness).

      I don’t think I have met all of your points squarely, but I’ve tried to oppose a certain central thread of your (and perhaps Williams’) knowledge-first picture.

