Could Hilary Putnam Have Been a Brain in a Vat?: Three Arguments against Reference, Part 1

In previous posts (such as this one and this one), I have sometimes alluded to the philosophy of structural realism. Structural realism says that we are unable to know the intrinsic character of the world outside our minds, although we are able to know a great deal about the structure of that world, especially its causally relevant features. Thus, we can know what we need to know to survive and thrive in our environment; we just can’t know what it is like intrinsically. For instance, we cannot know whether the surfaces of objects have the colors they appear to have in our visual perceptions of them or the hot and cold qualities we feel them to have, etc. Even the intrinsic character of spatial relations may not be as it appears to us. Still, the structure and dynamics of all these things are accessible to us—which is fortunate, because that is what matters for successful action.

I think structural realism is true and indeed inescapable. However, discussion of it in philosophy today is blighted by obsession with something called “Newman’s Objection,” after Max Newman, a Cambridge mathematician who published an important critique of Bertrand Russell’s version of structural realism as advanced in Russell’s book, The Analysis of Matter (1927). In my view, Newman rightly identified an important flaw in Russell’s structural realism, but not in structural realism per se, which has many options available for removing the difficulty. Unfortunately, many philosophers today, including many structural realists, treat Newman’s Objection, unless it can somehow be refuted, as a decisive refutation not merely of Russell’s version (and of subsequent formulations essentially like it) but of structural realism itself. The result has been a lamentable lack of progress in developing the implications and insights of structural realism.

In what follows, I will explain how I think Newman’s Objection should best be handled and why it is a paper tiger. However, I have chosen to do so via an analysis of a much better-known argument that in its essentials is practically identical with Newman’s, namely Hilary Putnam’s “model-theoretic argument” against the possibility that the terms of natural language or of our thoughts and percepts can have determinate referents in the mind-independent world.

This means that “what follows” is going to be a long haul! If anyone wants to read the whole paper in one fell swoop, it can be found here. Here at PoT, I will send it out in five installments, of which this post is the first. In this first installment, I begin with Putnam’s own warm-up exercise: his argument that a “brain in a vat” would be unable even to think that it was a brain in a vat. (To skip to the second installment, click here.)

Introduction

Hilary Putnam was a giant of 20th century philosophy. He began his career as a professor in 1951—earlier than I would have guessed—and in the first 25 years racked up a string of achievements (mostly in the last decade of this span) any one of which the average mortal would be pleased to consider the signature achievement of an entire career. Among these: hammering out the “functionalist” philosophy of mind that supplanted both behaviorism and the mental–state/brain–state “identity theory” to become the dominant account of what it is to be a mental state (a dominance which extends to the present day); promoting the revival of realism in the theory of universals; and most famously, pioneering the “causal theory of reference,” a revolution in the theory of reference the consequences of which are still reverberating through philosophy. Even in comparatively minor efforts, his contributions were outstanding. An example that sticks with me personally is his authorship of the best single article I know on the implications of the special theory of relativity for the nature of time. It is true that he was not the sole originator of any of these advances, but he was in the van of all of them, and it is fair to say he was a major force for good in philosophy during this period.

Unfortunately, this period ended in the later 1970s, when he decided he had discovered an argument that decisively refuted what he called “metaphysical realism,” meaning basically the correspondence theory of truth; i.e., the theory that thoughts and sentences represent mind-independent reality and are true when and only when they represent it accurately. This discovery was announced in a pair of papers, “Realism and Reason” (1976, hereafter R&R) and “Models and Reality” (1977, hereafter M&R) and a semi-popular book, Reason, Truth, and History (1981, hereafter RT&H). As a result, he rejected “metaphysical realism” in favor of what he called “internal realism” or “internalism,” a view he associated with the “’Coherence theory of truth’; ‘Non-realism’; ‘Verificationism’; ‘Pluralism’; ‘Pragmatism’” (RT&H, 50). To be clear, this change represented an about-face for Putnam, who had heretofore been a staunch proponent of the realism he now stigmatized as “metaphysical.” The argument he thought he had discovered, usually called Putnam’s “model-theoretic argument,” drew on the standard model theory of formal logic. It claimed that well-attested theorems from the semantics of formal logic show that a realist semantics is impossible for any language: specifically, that no terms of any language (including mental states such as thoughts and percepts) can refer to any specific, determinate referents in the mind-independent world.

This argument got a great deal of attention and discussion because of Putnam’s personal fame and because of its seemingly rigorous, logical character. However, I think it would be fair to say that few philosophers were convinced. Certainly, the field did not drop realism in the wake of the model-theoretic argument and begin cultivating Putnam’s alternative. Indeed, Putnam himself later abandoned internalism, although he never exactly returned to metaphysical realism. In the last 25 years of his life, he seems to have shifted views every five years or so from one thing to another, and I have not tried to trace the developments. As far as I can see, the productive and influential period of Putnam’s career ended in the late 1970s, notwithstanding that he published approximately 47,000 books and articles after that time.

Just because an argument fails to prove what it claims to prove does not mean we can’t learn important things from analyzing it. In what follows, I will examine Putnam’s argument, as well as a very similar argument made fifty years before Putnam’s against Bertrand Russell’s structuralist philosophy as proposed in his The Analysis of Matter (1927). The earlier argument still has relevance and in fact has received considerable attention in the past few decades, as we shall see. The comparison between it and Putnam’s argument will prove instructive. I begin, as Putnam himself did in RT&H, with a preliminary argument concerning the traditional skeptical question: Could you be a brain in a vat?

Brains in a Vat

In philosophy, talk of “brains in a vat” refers to a scenario like that depicted in the movie The Matrix where people who seem to be living normal everyday lives, going to work, meeting with friends, etc. are really disembodied brains kept alive in vats of nutrients. All of their afferent and efferent nerves are connected to microelectrodes controlled by a computer program that stimulates them in exactly the way they would be stimulated if they were embodied and moving through their environment just as we do. Thus, the entirety of their experience is a virtual reality illusion. The illusion is so complete and perfect that they can’t tell it isn’t real. They have no information that would enable them to detect that they don’t have bodies and don’t move around in their environment. This is a sci-fi version of skeptical arguments along the lines of, “How do you know your whole life is not a dream?” It is normally meant to raise questions about the status of our supposed knowledge of “the external world,” the world outside our minds.

In RT&H, Putnam makes a famous argument about brains in a vat to the effect that a brain in a vat (BIV) would be unable to say or think that it is a BIV, even though it is in fact a BIV and indeed just because it is a BIV. Oddly, he also claims (8, 15, 50–51) that one’s statement or thought that one is a BIV “is not true” and even that “it is not possible,” because if one were a BIV, one couldn’t say or think it. This strikes me as a flat error—especially if it is supposed to provide some reason to assure one that one isn’t a BIV—for reasons that will become plain soon.

Putnam’s argument, as first presented, is simple. A machine that could pass the Turing test—today we’d say ChatGPT—nevertheless does not understand the words by which it communicates. It merely manipulates symbols in accordance with a set of rules encoded in a program. We interpret its outputs in a meaningful way, but that’s just us. In themselves, the outputs have no meaning, and as evidence Putnam cites their disconnectedness from the world—ChatGPT lacks our sensory inputs and motor outputs. To ChatGPT, “tree” can’t mean tree because it has never seen a tree or interacted with trees in any way whatsoever. Moreover, if all the trees in the world were to vanish, it would make no difference to ChatGPT. Indeed, the whole world could disappear and ChatGPT would continue as always, as long as it was physically able. And all this despite its executing its program so well that its human interlocutor can’t tell it’s not human. Indeed, you could have one ChatGPT talk to another ChatGPT, and the two could “fool” each other into thinking each was a human being forever, and no external world conditions would make any difference.

Now, all this being so, we have next to observe that a BIV is in just the same situation as ChatGPT. Of course, a BIV has a human brain that evolved by causal interaction with the external environment, and its afferent and efferent nerves have the function, shaped by evolution, of mediating such interactions by producing and manipulating representations of features of that environment. And one might well argue that the said representations have representational status and representational content by virtue of their functions and of that evolutionary history. For example, binocular disparity computations in the visual system might have the function of representing distance from the perceiver. If the nervous impulses that feed these computations are artificial, then the function fails to be performed successfully. For, the resulting representations of distance are inaccurate. But this doesn’t mean the function isn’t there or isn’t performed. Thus, the system would still represent distance from the viewer, only it would be inaccurate. However, Putnam considers no such possibility. He acknowledges that the BIV might have intelligence and consciousness but denies that these make any difference. The question, he insists (12), is whether the words or thoughts of the BIV can refer to any external objects. He answers no on the grounds that the BIV is just like ChatGPT: causally disconnected from external reality.
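The functional point about disparity and distance can be made concrete with elementary geometry. The sketch below is purely illustrative (a small-angle vergence approximation with values I have chosen for the example), not a model of the visual system’s actual computation; the point is that one and the same function maps inputs to represented distances whether those inputs originate in real light or in artificial stimulation, which is why artificial inputs yield inaccurate but still distance-representing outputs.

```python
# Illustrative geometry only: for a fixated target, the vergence angle
# (one form of binocular disparity information) fixes viewing distance.
# Small-angle approximation: distance ~ interocular baseline / vergence angle.

BASELINE_M = 0.063  # typical human interocular distance, about 6.3 cm

def distance_from_vergence(vergence_rad: float) -> float:
    """Represented distance (meters) given a vergence angle (radians)."""
    return BASELINE_M / vergence_rad

# A vergence angle of 0.063 rad (~3.6 degrees) implies a target ~1 m away;
# halving the angle doubles the represented distance.
print(round(distance_from_vergence(0.063), 2))   # 1.0
print(round(distance_from_vergence(0.0315), 2))  # 2.0
```

If a vat computer feeds the system signals corresponding to a 0.063-radian vergence when there is no target at all, the system still computes “one meter away”; the representation is simply false.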

For the moment, let us suppose that Putnam is right that no mental states of a BIV can refer to external objects. It follows that a BIV, just because it is a BIV, cannot think about external objects. And these external objects include vats, electrodes stimulating brains, computers running programs, and everything else needed to conceive of being a BIV! Thus, if one were a BIV, one wouldn’t be able to think so. Therefore, if one can ask whether one is a BIV, one can thereby rest assured that the answer is no. The supposition that one is a BIV is self-refuting in the same way the suppositions “all statements are false” and “I do not exist” are self-refuting (7–8; 50–51).

But does Putnam mean to be saying that the self-refutation of the supposition that one is a BIV is subjectively accessible to the individual? That is, that a BIV cannot even seem to itself to formulate the question of whether it is a BIV? That a BIV cannot think about external objects and is aware of this limitation? No:

although the people in that possible world [sc., where everyone is a BIV] can think and ‘say’ any words we can think and say, they cannot (I claim) refer to what we can refer to. In particular, they cannot think or say that they are brains in a vat (even by thinking ‘we are brains in a vat’). (8) [Note: Putnam made plentiful use of italics in all his writing. It is tedious to repeat “original italics” after every quotation. Throughout this essay, italics are original in every quotation from any author.]

Thus, Putnam’s argument is not much comfort after all. A person who wonders, “Do I exist?” has thereby generated enough to demonstrate an affirmative answer: “If I am around to ask the question, then I exist!” But a person who wonders, “Am I a BIV?” has done no such thing. Putnam allows a BIV to be physically possible. So, when one wonders, “Am I a BIV?”, there are two possible answers, not just one. One answer is that one is not a BIV but is genuinely considering the question. The other is that one is a BIV who is delusionally considering what seems to be the same question but really isn’t, since one’s thoughts don’t really refer to vats, etc. Nothing in Putnam’s argument rules out this second case. So, since which case one is in is not subjectively accessible, Putnam’s argument is really no help at all. Even if Putnam is right, you can’t know by means of his argument that you are not a BIV. Indeed, as should be clear by now, even if Putnam is right—that is, that his argument is correct that the thoughts of a BIV cannot refer to external things—his argument does not show that it is impossible that one is a BIV. Rather, it only shows that if one is a BIV, one can’t think it—despite the fact that one thinks one can think it. (This is the “flat error” I mentioned earlier.)

Is Putnam right? It hardly seems likely. Let us focus on Putnam’s claim that the BIV does not ask whether it is a BIV despite having the same subjectively accessible experience as a normal human being who asks, “Am I a BIV?” In terms of what is subjectively accessible, the BIV and the normal person might be identically situated. This means that the BIV cannot tell whether its thoughts of “vats” are about external, physical vats or not. But this is deeply weird, right? The BIV is sitting in its vat idly wondering whether it could be a BIV, and also whether to go to Panda Express again for lunch today, and many other things—and these thoughts of the BIV are subjectively indistinguishable from one’s own—but Putnam says no, it’s not thinking about being a BIV or eating lunch at all. This must mean that the contents of its own thoughts are totally inaccessible to it. The “content” it is aware of when having these thoughts, which for all the world seems to be about physical objects and activities like eating lunch, for all it knows might not exist qua content! Or might be radically different from what it seems to suppose—except that it can’t suppose it is thinking about anything for sure, since that would require knowing the content of its thoughts! And what goes for the BIV goes for us, too. Putnam’s argument has the consequence that it must be possible that we have an experience phenomenologically identical to that of a person who is thinking about lunch, even though we are not thinking about lunch at all—and more, that it does not even seem to us that we are thinking about lunch, since that would require us to be able to frame a proposition about lunch, which the argument says we can’t do. And all this, mind you, in a normal, waking, unaltered state of mind. That cannot be right. I think we can safely take this consequence as a reductio of Putnam’s argument.

What was the argument again? It was that mental representation depends on causal relatedness to the representatum, and a BIV is causally isolated from the world of external things.

For there is no connection between the word “tree” as used by these brains [in vats] and actual trees. They would still use the word “tree” just as they do, think just the thoughts they do, have just the images they have, even if there were no actual trees. Their images, words, etc., are qualitatively identical with images, words, etc., which do represent trees in our world; but we have already seen … that qualitative similarity to something which represents an object … does not make a thing a representation all by itself. In short, the brains in a vat are not thinking about real trees when they think “there is a tree in front of me” because there is nothing by virtue of which their thought “tree” represents actual trees. (12–13)

This argument is much too hasty. There is no connection between the word “hobbit” and actual hobbits, but that doesn’t stop us from thinking about them. Our minds have powers of representation that don’t depend in any direct way on causal connections to their representata. And our powers of representation would be shared by a BIV. So, just as we can wonder if we are a BIV, so a BIV, supposing it somehow knows that it’s a BIV, can wonder whether it is a normal embodied human. It might say, “I know that all my percepts, as of a tree or a car or my own body, are systematic illusions. What if my percept as of a tree ten feet in front of me was veridical and there was in fact a tree ten feet in front of me just as it appears?” By the same token, a BIV that thought it was embodied might say, “Maybe I’m a BIV. Of course, if I were a BIV, then all my percepts would be systematic illusions. Still, they would be illusions as of trees and bodies and so forth, including vats and computers and everything else needed to enable me to conceive of being a BIV. It is that (hopefully imaginary!) world I’m contemplating.”

Note that the scenario I just described, in which a BIV asks whether it is a BIV, depends on the intentionality of percepts. (Intentionality is the property of being about something. All instances of meaning and reference are instances of intentionality. I will discuss a bit more about intentionality and its importance in what follows, but detailed discussion of the philosophical issues surrounding intentionality would take this paper too far afield. For an influential treatment of the intentionality of thought and perception, see Harman 1990. For a brief explainer on the concept of intentionality in philosophy, see Byrne 2006.) That is, it depends on percepts being inherently about something beyond themselves. This is a point that Putnam seems not to recognize. Putnam treats the subjectively accessible elements of perception as images, qualities, sensations complete unto themselves. He often refers to them as “mental signs.” This is a profound mistake. In fact, all percepts are inherently about features of the external environment; that is part of what it is to be a percept. In particular, as I mentioned earlier, our brains were shaped by millions of years of evolution to represent our local environment from sensory stimulation. Even a BIV, however cut off from the world we contrive it to be, still inherits that structure and functionality. Inherent to the “BIV” thought experiment is that a BIV’s afferent and efferent nerves are connected to sources of electrical stimulation that exactly mimic the stimulation they would have in a live human body. Therefore, the BIV’s representational system of percepts and beliefs operates normally to generate representations as of its local environment. This is all a grand illusion, of course. The whole point of the thought experiment, in the hands of nearly all philosophers other than Putnam, is that the BIV is massively deceived. But “deception” presupposes that the BIV represents whatever it is deceived about.

My interpretation of what led Putnam to his “internal realism” face-plant is that he got carried away with the success of the causal theory of reference, mentioned earlier, of which he was a major author in the years immediately preceding his plunge into the abyss. The causal theory of reference was the most stunning development in philosophy of the 1970s, but it is not the subject of this essay. For our purposes, it is sufficient to say that according to the causal theory of reference, the reference of at least some terms of natural language and of at least some of our thoughts and percepts depends on causal–contextual connections between the terms or thoughts and the objects that are represented, not on any descriptions of those objects by which we might attempt to isolate or define them.

For example, consider the concept of water. You could say that water is a clear, odorless, tasteless liquid necessary to life and/or that it is the liquid that falls from the clouds as rain and forms streams, lakes, and seas. But according to the causal theory of reference, descriptions of this kind will never do to determine the reference of “water,” because we think of water as a certain substance with an essential nature that is not necessarily captured by accidental properties like falling from clouds as rain or even being an odorless, tasteless liquid. In Putnam’s own famous example, we can imagine a Twin Earth orbiting a distant star that is mostly identical to our own Earth except that what fills its lakes and oceans is XYZ instead of H2O. We have a strong intuition that Twin Earthlings would mean something different by “water” from what we do, because their “water” is chemically different—it is different stuff, although this is a fact that could be known only by advances in science that have been made in the past few hundred years. Prior to that, the mental representations of “water” by the Twin Earthlings and ourselves were identical. It follows that our mental representations are insufficient to determine the reference of a term like “water.” We need a causal–contextual connection to the substance (XYZ or H2O) as well. Putnam sloganized this point with the refrain, “meanings just ain’t in the head!”

Pursued radically enough, this doctrine might be taken to entail that our mental representations refer to what normally, systematically causes them, whatever that might turn out to be. Just as “water” could be XYZ or H2O, depending on which substance normally causes our water-experiences, so “trees” could be any of many quite different things, depending on what normally causes our tree-experiences. If for a BIV, what causes tree-experiences is a certain pattern of electronic impulses or a certain sequence of code in the program that controls the delivery of those impulses, then “tree” in BIV-English would refer to that pattern of impulses or sequence of code. And Putnam entertains these possibilities (14–15). He also explicitly justifies the claim that a BIV cannot refer to external objects by appeal to the causal theory of reference, citing the XYZ–H2O example (22–25, 27–29).

I would be among the last to suggest that the world must be just as it appears to our senses. As an obvious example, the reality our senses detect in the form of colors, tastes, and sounds is in my view nothing like intrinsic color, taste, or sound qualities. A collection of NaCl molecules tastes salty because of the chemical reactions by which the molecules stimulate our taste receptors, not because NaCl molecules have the intrinsic quality of saltiness. Again, science teaches us that a “solid” oak table is mostly empty space, because the atoms of which the table is composed, like all atoms, are almost entirely empty. (The ratio of the radius of a hydrogen atom to the radius of its proton is 62,976.19 to 1. To visualize that, if the proton of a hydrogen atom were a 3-inch grapefruit at the center of Levi’s (football) Stadium in Santa Clara, California, the electron in the lowest orbital shell would be beyond the parking lot.) But that’s not the way it appears to our senses! These examples illustrate the familiar contrast between the “manifest image” of sense-experience and the “scientific image” of external reality.
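The arithmetic behind the grapefruit image is easy to check. The following back-of-the-envelope calculation uses standard physical constants (the Bohr radius and the proton rms charge radius); the measured proton radius varies slightly across experiments, which is why this computation lands near, rather than exactly at, the ratio quoted above.

```python
# Rough check of the "mostly empty space" claim about atoms.
# Constants are standard values; the proton charge radius is the
# ~0.84 fm figure from recent measurements.
BOHR_RADIUS_M = 5.29177e-11   # radius of hydrogen's ground-state orbital
PROTON_RADIUS_M = 8.414e-16   # proton rms charge radius

ratio = BOHR_RADIUS_M / PROTON_RADIUS_M
print(f"atom-to-proton radius ratio: {ratio:,.0f} to 1")  # ~63,000 to 1

# Scale the proton up to a 3-inch grapefruit (radius 1.5 inches):
shell_in = 1.5 * ratio
shell_miles = shell_in / (12 * 5280)  # inches -> miles
print(f"electron shell at that scale: {shell_miles:.1f} miles out")
```

At roughly a mile and a half from the grapefruit, the lowest orbital is indeed well beyond any stadium parking lot.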

But we mustn’t get carried away here. Recall my remarks above about the evolved perceptual faculties of the brain. Over millions of years of causal interaction with certain features of the physical environment, our brains evolved the capacity to compute stable representations of those features. I mean such features as shapes of physical objects, their positions, orientations, distances, surface textures, the ambient lighting conditions, and so forth. Our brains’ ability to do this with tolerable accuracy is critical to our survival and reproduction. It is the detection of features of physical reality that drives this evolutionary process, regardless of the form in which those features are presented to consciousness. Instances of these features and the objects that possess them are the referents of our percepts (and thence of our perceptual beliefs) when perception works properly. Even when it doesn’t work properly, the universal features represented are those that our perceptual faculties evolved to represent. The drunk suffering from delirium tremens who “sees” pink rats sees them as solid objects arrayed in the layout of space, and so does the BIV that seems to see a tree before it. Furthermore, it makes little sense to say that the BIV is “actually seeing electronic impulses” any more than to say that the drunk is seeing the alcohol, except as a colorful expression. In both cases, it seems best to say that they seem to see illusory objects as a result of dysfunctional causes; they both are experiencing illusion or hallucination.

To illustrate these points, consider an analogy with speech recognition software such as most personal computing devices today are equipped with. Automatic speech recognition is a difficult problem that has been an object of commercial and academic AI research for over seven decades and has only come to match human performance in the past five years or so. A wide array of technologies has been employed. Abstracting from the differences among these, an automatic speech recognition system must isolate and identify objective abstract features—words of a target language such as English—from a noisy acoustic waveform, despite variations in pronunciation, speed, pitch, tone, accent, and articulation. Performing this task is not a simple matter of pattern matching; it is an active computational process that employs knowledge of the target language and of statistical properties of its various elements (phonemes, words, phrases, etc.) to compute complex relationships across temporal elements of the waveform. The important point is that the structures and processes of this computational process can’t be understood or explained without reference to the words—which are abstract linguistic types—and even grammar of the target language. That is, the operation of the system is only intelligible by reference to the words it functions to detect, because it is functionally organized around detecting them. It is therefore nonsensical to say that such a system merely happens to identify words of the target language and that if its inputs were “stimulated” by a non-speech signal, its output would represent something else or nothing at all.
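The contrast between mere pattern matching and a process organized around knowledge of the target language can be made concrete with a toy sketch. What follows is emphatically not a real speech recognizer: it stands in for the acoustic signal with a garbled character string, for acoustic similarity with edit distance, and the vocabulary counts are invented. What it does share with real systems is the noisy-channel structure: the output is chosen by combining a fit-to-input score with a statistical prior over words of the target language.

```python
# Toy noisy-channel "word recognizer" (illustrative only).
# A garbled token stands in for the acoustic input; edit distance
# stands in for acoustic similarity; invented counts stand in for
# a language model over a tiny English vocabulary.
import math

VOCAB_COUNTS = {"tree": 50, "three": 120, "free": 40, "the": 1000, "thee": 2}
TOTAL = sum(VOCAB_COUNTS.values())

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def decode(noisy: str, penalty: float = 2.0) -> str:
    """Pick the word maximizing log-prior minus a mismatch penalty."""
    def score(word: str) -> float:
        return math.log(VOCAB_COUNTS[word] / TOTAL) - penalty * edit_distance(noisy, word)
    return max(VOCAB_COUNTS, key=score)

print(decode("tree"))  # prints "tree": an exact match wins outright
print(decode("thre"))  # prints "the": the language-model prior decides
```

Note that “thre” decodes to “the” rather than “three,” although it is one edit away from both: among equally close candidates, the statistical prior over the target language decides. That is the sense in which such a system’s operation is organized around the language it functions to detect, rather than around raw similarity of signals.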

For example, discussing the BIV scenario, Putnam writes:

let us specify that the automatic machinery [that stimulates the afferent and efferent neurons of the BIV] is supposed to have come into existence by some kind of cosmic chance or coincidence (or, perhaps, to have always existed). (12)

He then claims that the BIV’s use of “tree” would not refer to trees, due to the lack of causal connection between the BIV and any trees. Applying this sort of reasoning to our automatic speech recognition system for English, we may suppose that it is fed input “by some kind of cosmic chance” that just happens to be like acoustic waveform input produced by somebody speaking English. It seems clear that its output would still consist in an identification of the relevant abstract types. Likewise, if we imagine a Twin Earth where waveforms structured like English speech had some utterly different identity—perhaps the otherwise meaningless courtship song of some nonlinguistic bird species—the outputs generated by our system from such waveforms would still consist of strings of the relevant abstract types. To be sure, the signal being processed would not consist of English words, but the types identified—the structural properties—would be identical with English words. That is, the content represented would still be type-identical with English words (even though the same type-identical content might also be realized concretely by the courtship song of certain bird species, as well as many other things).

Now let us apply these observations to the human visual system. There are four points to make. First, like the output of an automatic speech recognition system, our percepts are the output of elaborate computational processes that are only intelligible by reference to abstract features of the external environment that they identify and track. The features in question are such as I have named: volumetric shape, size, distance, lighting conditions, etc. Thus, these abstract properties are at least part of the content of visual percepts, and so they set accuracy conditions for percepts: if the stimuli that produce percepts as of these external features possess the features, then the percepts are veridical, otherwise not. But in no case are they percepts as of different features.

Second, the objective properties of the distal environment that the visual system works to detect in the way described are abstract, structural properties, not intrinsic properties. For example, most of the perceived properties I have named as examples are spatial (although there are many others), but they are not properties of the intrinsic character of space. Rather, they are “structural” (think “mathematical” or “causation-relevant”) properties of space. Analogously, the automatic speech recognition system identifies abstract types identical with English words even when its inputs, being the product of “some kind of cosmic chance,” are not English speech at all. This point has profound implications which are not the topic of this paper. I will mention a bit more about it presently.

Third, we must keep in mind the distinction between particular and property content of percepts. All sense-perception is a causal interaction with particulars. Context determines which, if any, particulars are perceived (for example, which of two identical twins is seen in the distance). However, a percept can’t attribute as property content just whatever properties the perceived particular happens to have, but only those properties it is capable of detecting. For example, the human visual system can detect shape and distance but not electric charge. Applying this point to Putnam’s example of water and Twin Earth, Earthlings and Twin Earthlings perceive different particular samples of “water.” The different samples have importantly different properties: the Earth sample has the property being H2O and the Twin Earth sample has the property being XYZ. These properties are not perceptually attributed to the water samples for either Earthlings or Twin Earthlings. These properties are aspects of the “meaning” of “water”—of what “water” signifies—that aren’t in the head. However, other properties of the samples, such as being clear and colorless and forming ripples on their surfaces, are perceptually attributed. These aspects of meaning are in the head (although it took millions of years of causal interaction with the environment by our ancestors to get them there).

Fourth, note that body is a perceptual property attributive of the visual system (for discussion, see Burge 2022, 91–99 and passim). That is, the visual system identifies and tracks bodies as such. I mention this to drive home the point once again that a spatial layout as of solid objects bathed in light from certain sources is a central component of nonconceptual percept-content of visual perception. This is as true of a BIV as of a normal embodied human.

On the other hand, there is an important disanalogy with the automatic speech recognition system: it does not (as far as we know) have conscious awareness of its outputs, whereas human beings do have conscious awareness of at least many aspects of many percepts. Related to this, as I mentioned earlier, human perceptual awareness has intentionality. That is, we are not usually aware of our percepts as such, but only of what they are percepts of. Human perceptual awareness is awareness of the external objects and properties that our percepts are about. The outputs of an automatic speech recognition system, by contrast, would seem not to be inherently about anything. We of course interpret them as representing words in a stream of speech. But that’s just us. They do not seem to have intentionality in their own right. Ours do. This means that the objects of visual awareness, including for a BIV, are presented as external, mind-independent objects in physical space. In the case of the BIV, obviously the contents of its visual percepts are not veridical. But the point is that the BIV’s visual percepts have such contents. This is why, pace Putnam, a BIV can ask whether it is a BIV. Putnam misses the intentionality of percepts. He speaks repeatedly of mental states such as percepts considered in themselves, without the external relations that might fix their referents, as “bracketed” (borrowing from Husserl) and “notional” (borrowing from Dennett) (RT&H 28–29). I have mentioned previously his referring to them as “mental signs.” These expressions reflect his attitude—his unargued assumption—that mental states can’t have intentionality unless it is supplied by some reductive means such as similarity relations or causal relations. On the contrary, as I have said, in my view mental states such as thoughts and percepts simply are intentional states. I think this is a fact that must be accepted whether we can explain it or not. (On this point, besides Harman 1990 referenced previously, see also Chalmers 2004 and Burge 2022, 30–36.)

I promised to say something more about the fact that the properties of external things that we are capable of perceiving are merely causal or structural. In my view, this is a direct consequence of the fact that we can know the world outside our own minds only by causal interaction with it (Potts 2011; 2022). In connection with the BIV scenario, a pertinent implication of the fact that we can perceive only the causal and structural properties of the world is that it allows for the world to be intrinsically quite different from how we conceive it to be without our percepts being unveridical. For instance, I mentioned above that an apparently solid table is said by science to be mostly empty space. But these are not incompatible descriptions if we bear in mind that perceptual solidity is merely structural, not intrinsic. The perceptual “solidity” of the table refers to its volume, weight, impenetrability by ordinary macro objects such as dinner plates, and so on. Structural facts of this kind remain true regardless of what the underlying quantum reality turns out to be—and regardless of what the still deeper reality underlying the quantum turns out to be, assuming there is one. There is now speculation that spacetime itself might be emergent rather than fundamental (Becker 2022). This would not mean that the structural, causation-relevant spatial relations that the visual system attributes to things aren’t real or that visual percepts as of these spatial relations are unveridical.

Reflections of this kind may serve to defang the BIV scenario, which seems to presuppose that we have a right to assume that the world is intrinsically as it appears to us. We don’t. Of course, the BIV scenario differs from the contrast between manifest image and scientific image, since it supposes that we have been removed from the world in which we evolved. This is my ground for denying that we should say that the BIV perceives cortical electrode stimuli or parts of a computer program: the perceptual context is abnormal. But this is not a very profound reason, and the BIV scenario could easily be modified to avoid it. In that case, I would accept the modified scenario as showing that the external world could turn out to be much weirder than we usually imagine. But that wouldn’t mean we couldn’t think we are a (manifest) brain in a (manifest) vat!

To conclude, the main point is the following. Our percepts are generated by perceptual systems that evolved to detect and represent certain sorts of features and objects in our environment. Where the features in question are basic ones such as I named above (shape, size, etc.), it is constitutive of the relevant perceptual systems that they represent those features. The percepts generated by such systems are not “signs” that represent whatever normally causes them in whatever environment they might happen to be, but representations as of certain mind-independent properties and relations. It may well be that our concept of water refers to something quite different from what it would refer to if we were Twin Earthlings. But it is a big mistake to generalize from that to the conclusion that our percepts as of solid objects or as of distance or other spatial relations, for example, would refer to something radically different after we were transplanted to a radically different environment (such as out of our bodies and into vats of chemicals). Most perceptual illusions are permanent. A fish out of water is not therefore a land animal.

References

  • Becker, Adam. 2022. “The Origins of Space and Time.” Scientific American, 326 (Feb. 2022): 26–33.
  • Burge, Tyler. 2022. Perception: First Form of Mind. Oxford U.P.
  • Byrne, Alex. 2006. “Intentionality.” In Sahotra Sarkar and Jessica Pfeifer (eds.), The Philosophy of Science: An Encyclopedia, Routledge, 2006: 405–410.
  • Chalmers, David J. 2004. “The Representational Character of Experience.” In The Character of Consciousness, Oxford U.P., 2010: 339–379.
  • Harman, Gilbert. 1990. “The Intrinsic Quality of Experience.” Philosophical Perspectives, 4: 31–52.
  • Potts, David. 2011. Theories of Experiential Awareness. Ph.D. Thesis. University of Illinois at Chicago.
  • ———. 2022. “The End of History (for Physics)?” Policy of Truth. https://irfankhawajaphilosopher.com/2022/02/17/the-end-of-history-for-physics/
  • Putnam, Hilary. 1976. “Realism and Reason.” In Meaning and the Moral Sciences, Routledge & Kegan Paul, 1978: 123–138.
  • ———. 1977. “Models and Reality.” In Realism and Reason: Philosophical Papers, Vol. 3, Cambridge U.P., 1983: 1–25.
  • ———. 1981. Reason, Truth, and History, Cambridge U.P.
  • Russell, Bertrand. 1927. The Analysis of Matter. Allen and Unwin.

12 thoughts on “Could Hilary Putnam Have Been a Brain in a Vat?: Three Arguments against Reference, Part 1”

  1. I have an extremely flat-footed observation about the initial characterization of what structural realism is: on the face of it, structural features of the world are intrinsic features of the world (or features that are part of how those parts of the world that I have sensory or perceptual access to really or intrinsically are). I suspect that, by ‘intrinsic features’ something else is meant, something like: the features things would have if they really were the way that they appeared (i.e., being red or solid or the like). But if this is what is meant, then standard direct realism (plus a theory of perceptual forms or the like) seems plausible: what we sense and perceive is the way the world really is, just by way of our specific forms of awareness (David Kelley’s position). I take it this is not a form of structural realism? I wonder, then, if this is true: once we get clear on or disambiguate expressions like ‘intrinsic features’ and ‘object of sensory/perceptual awareness’, structural realism and direct realism (plus perceptual forms or the like) come to be notational variants of one another. I’ll be reading through with these points in mind.

    • Hi Michael,

      Yes, you could say that structural realism implies that the features of the environment that we perceive are perceived in a form (namely, an intrinsic character) supplied by us. The colors and sounds as we see and hear them are supplied by our own consciousness. It has to be so, because these properties don’t have causal powers. Evidence of this is that these properties play no role in the physical explanation of how we “perceive” them. Instead, that role is played entirely by electromagnetic waves, stimulation of photosensitive receptors, sound waves, wiggling of the eardrums, etc. And you could say that when we see, say, that a ball is red, we are perceiving the spectral reflectance of its surface (a physical property) and the red quality is the “form” in which we perceive it (Kelley’s language) or its Fregean “mode of presentation” or something like that. That’s not the way I would put it, but the difference between that way of putting it and what I think is correct makes little difference most of the time.

      Locke has a doctrine that is not too different from this, when he says (Essay II.xxxii.14-15) that ideas like blue and yellow are probably not the same as the powers in objects that produce those ideas in us, but we can still regard the ideas as veridical perceptions, as long as they are reliable indicators of those powers in the objects. Blue is the idea (“form” in Kelley-speak) by (in) which we perceive the relevant power in the surface of the object. This is consistent with SR, and it could be helpful to think of it that way.

      By the way, I consider myself to be a direct realist! Maybe we should take that as a further indication that there’s no incompatibility between SR and a view like Kelley’s. I only read Kelley’s book for the first time about 12 years ago. I wasn’t favorably impressed, and I haven’t thought about it much since then. Maybe I should revisit.

      Of course, structural realism is not a theory of perception. That is why it can be compatible with Kelley’s view of perception, Locke’s view, and mine. What SR says is that we’re never going to know anything about the intrinsic character of things in the external environment. We must be content to learn its structure and dynamics.

      One point that should be noted: properties like spectral reflectances or Locke’s powers are not intrinsic properties. They are causal. And so we can learn something about them, namely how they can affect us.

      One last point. I have the pleasure of teaching Ancient Philosophy this semester. We’ve just finished the Pythagoreans, and it occurs to me that they are structural realists of a kind and that maybe their doctrine can help to illustrate what SR is saying. Famously, they were impressed by how the numerical ratios of the lengths of the strings of an instrument produce the harmonic intervals of music (1:2 is the octave, 2:3 is the fifth, 3:4 is the fourth), and they concluded that this numerical structure is the determiner of musical harmony. That is, what matters is structure, not substance—not what the strings are made of or how long they are or even whether you have a stringed or wind instrument. And they took this so far as to say that everything is numbers. I presume they didn’t mean literally that there is no such thing as substance or intrinsic character. If so, then this is like structural realism. SR also is (sort of) saying “everything is numbers.” The math part is mainly what we can learn about nature.

      • Okay, David, I finally got back to this! Read through all five parts. Ambitious and impressive! I’ll keep the focus here on my initial comment and your response.

        First, in my initial comment, I confused realism about what we can know (perceptually or otherwise) with realism about the objects of perception (or what we are aware of in perception). Though these two questions/issues are related, forget about the latter (and David Kelley’s realist theory of perception).

        Second, though it is obvious enough that a red surface is not intrinsically red (i.e., red in itself, independently of the process of perceiving it), the surface might be intrinsically something else (whatever is meant by ‘intrinsically’). A kind of scientific realism (about what we can know, about the true objects of knowledge) might say this: the surface has such-and-such atomic structural features that provide it with causal powers to affect our perceptual systems as it does. Your view, I think, is that what we perceive and know here are the causal powers themselves (and that these count as non-intrinsic, structural features of the object).

        Generalizing a bit, this suggests that: structural features are features that are a function of the existence, but not the particular identity, of more-basic features, the intrinsic features. But I’ll continue to speak of the structural features that are causal powers.

        I’m inclined to side with the version of scientific realism that I just sketched, not structural realism. This is mainly because what the relevant sort of scientific theory posits are precisely the underlying (intrinsic) features that explain the effects on our perceptual system (something about their identity, not just their existence or some highly-abstract relationships between them). Perhaps, independently of relevant scientific theories of the atomic structure of surfaces and how vision works, we know little or nothing about what these features are (I’m not sure, but we certainly know little about their deep nature or structure independently of the process and results of relevant science).

        One could run the model-theoretic argument on the scientific theories themselves. Plausibly, this would make them unable to make their intrinsic-character posits. But evaluating this move is beyond my pay grade.

  2. Pingback: Hilary Putnam’s Model-Theoretic Argument for “Internal Realism”: Three Arguments against Reference, Part 2 | Policy of Truth

  3. Hi Michael,

    I’m sorry it’s taken this long to reply to your comment. (It’s been a busy week.)

    I think you might be responding to the peculiarity of saying that what we perceive are “causal powers themselves.” I agree that that is peculiar, and I wouldn’t want to put it that way (although I realize it might sound like that’s just what I’m saying). When I see a coffee cup or hear a cat meow, I wouldn’t say I’m seeing the cup’s power of reflecting light or the cat’s power of causing air waves. Rather, I see the cup itself; I hear the cat’s activity. But I see and hear them by means of the light and sound waves. Moreover, what I can learn about the cup and the meowing are the things they can do, not what they are like in themselves.

    Does that help? We do perceive particulars and the particular instances of their properties such as surface spectral reflectances and vocal cord vibrations. However, we can learn about these things only insofar as they are capable of affecting us.

    On the question of “the underlying (intrinsic) features that explain the effects on our perceptual system,” I don’t think science does find underlying intrinsic features that explain causal powers. Think about a fundamental causal power in modern science such as electric charge. Electric charge is fundamental in that no deeper, underlying nature of any kind is posited to explain it. It is taken as primitive. There is talk of electric fields, but the electric field is generated by the charges, not vice versa. They are part and parcel of each other. And the field is not intrinsic. It is also defined by its power to affect other things. Electric charge is a fundamental quantity that is defined in terms of its effects on things as specified in scientific laws, such as Coulomb’s Law and Gauss’s Law. That is all we know about charge! There is nothing underlying it. Charge is a dispositional property, a power.

    Of course, that could change. Perhaps one day there will be an even more fundamental theory that specifies a new set of primitives from which charge arises as an emergent property. As an example, we can speculate that one day there will be a theory of quantum gravity that will explain the gravity of general relativity as an emergent phenomenon from the behavior of gravitons. That’s entirely possible, and the new primitives will be underlying with respect to the old ones. But the new primitives will not be intrinsic. They will be a new, deeper level of dispositional properties.

    In other words, I think powers are fundamental. They do not arise from underlying, intrinsic properties. For philosophical argument showing this—as opposed to my assertions about science—see a short paper (just four pages) by Simon Blackburn: “Filling in Space.” He says, “science finds only dispositional properties, all the way down,” and the bulk of the paper is devoted to showing why it’s a mistake to suppose that there must be “categorical grounds” underlying all dispositions.

    • That’s helpful, David. Thanks.

      Maybe I’ll look at the Blackburn paper. The distinction between dispositions and their potential non-dispositional (or “categorical”) grounds makes more sense to me than the intrinsic/structural distinction (or at least more sense to me than using the words ‘intrinsic’ and ‘structural’).

      I think of science as, at least in important part, making posits about grounds for dispositions that are not, at least not in the first place, themselves dispositions. E.g., if the reflectance properties of a surface are what cause certain redness experiences in us (and are what we are aware of in having those experiences), the most natural way to think of these properties is that they are “ways that the object itself is” that explain their powers to affect things (including us, in the production of surface-color-experience in our nervous systems).

      The concept of electrical charge does seem to pick out nothing more than a dispositional property. And it seems perfectly coherent to think of this disposition as brute (as there not needing to be any further ground, let alone intrinsic-character ground, for why there is an electric charge dimension to particles). However, ‘particle’ and ‘location’ seem to mean (and refer to) categorical elements that explain some of the dispositions (e.g., why just these things are affected by the thing in this way). Are these concepts mistaken in aiming to refer to the way things really are, that which might explain various causal dispositions? Do they somehow only appear to seek such reference? Is this some kind of illusion embedded in our conceptual scheme (as some people think about free will, on standard “libertarian” views of what this is)? Or perhaps they count as “structural” in some broader sense that includes more than just dispositions (in which case my understanding of what “intrinsic” features are is too broad, including certain features that are really “structural” on the structural realist understanding)?

      • Hi Michael,

        However, ‘particle’ and ‘location’ seem to mean (and refer to) categorical elements that explain some of the dispositions (e.g., why just these things are affected by the thing in this way). Are these concepts mistaken in aiming to refer to the way things really are, that which might explain various causal dispositions?

        I think there is plenty of categorical/intrinsic stuff out there. Maybe this is cause for embarrassment, but I’m still wedded to the substance–attribute metaphysics laid down by Aristotle (and which survives as one of Kant’s categories). So, I’m happy to believe in things like particles that have properties like charge, in spacetime as a substantial entity of some sort, etc. Maybe the substance–attribute structure of reality is a conceptual artifact; for example, a cognitive illusion caused by the subject–predicate structure of natural language. Anyway, I’ve yet to see a reason to give it up, so I continue to think in substance–attribute terms. But I don’t think that substance as such does anything or that we will ever learn anything about it. It’s a logical or metaphysical requirement: there has to be a bearer of properties. It’s not something posited in a scientific theory to explain anything. Every explanatory posit in science seems to be dispositional. I can’t think of any exceptions other than space or spacetime (which are relational, not categorical).

        You should definitely look at Blackburn’s piece, if you’re at all interested in this. It’s an Analysis paper like Gettier’s. In just a few pages, he will give you the logical reasons why it’s no use talking about categorical grounds of dispositions. When I was writing my dissertation, I had never heard of structural realism (and neither, apparently, had anyone on my committee). Blackburn’s paper was my only lifeline, the only evidence I had that I wasn’t crazy.

        • I’ll definitely take a look at Blackburn’s piece. But spatial location seems to causally explain stuff (e.g., why the characteristic effects of this kind of entity having that kind of property happen here not there). So some of the categorical stuff seems not to be just epiphenomenal, metaphysical fluff.

  4. Pingback: Thoughts on Complicity | Policy of Truth
