Brains, Computers, Metaphor, Synecdoche, and People

Aeon has an interesting piece by psychologist Robert Epstein on why the brain is not a computer. In one sense, this is just a truism. Computers are machines made by human beings, whereas brains are animal organs that have evolved over a very long period of time; computers are made of silicon chips, brains aren’t; computers aren’t neurochemical, brains are; brains can do lots of things that computers can’t (yet, anyway); and so on. This truism, though, depends on a rather imprecise, colloquial sense of the word ‘computer.’ More strictly speaking, a computer is just any device that computes, that is, “performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”1 In this sense, many cognitive scientists believe that the brain is literally a computer. While it is of course not a ‘device’ designed by human beings, it nonetheless performs mathematical and logical operations and assembles, stores, correlates, and more generally processes information. Indeed, to many people, and not just cognitive scientists, it might seem that the truism is that the brain is a computer in this sense.

When Epstein says that the brain is not a computer, he means that it is not literally a computer; it does not literally process information, store memories, or retrieve information. To speak about the brain in this way is to speak metaphorically rather than literally; the brain doesn’t process information any more than your spouse is the light of your life, Bach speaks to your soul, or your grandmother knows in her heart that you are a good person. This is hardly a truism. As Epstein emphasizes, it flies in the face of the dominant view in cognitive science. Of course, scientists who maintain that the brain literally processes information and the like need not maintain that when you and your computer each calculate the number of years it will take you to pay back your student loans, you and your computer are doing exactly the same thing. They do maintain, however, that you and your computer are both engaged in literally the same kind of thing, namely calculating the number of years it will take you to pay back your loans, or, more generally, processing information and performing mathematical and logical operations. What’s more, the dominant view in cognitive science is not just that your brain and your computer do some of the same things, but that most of what the brain does is information processing, and in exactly the same sense of ‘information processing’ in which your computer processes information.

Epstein doesn’t simply hold that this language is metaphorical rather than literal. He also insists that it is hindering scientific progress in understanding the brain and human cognition generally. This claim is stronger than the view that the language is metaphorical, and doesn’t follow from it. After all, we might readily concede that the computer language is metaphorical, but argue that it is a useful model that yields genuine insight. Epstein disagrees.

To some extent his argument seems to rest on the idea that simply because the computer model is metaphorical, it’s therefore literally false and hence cannot give us genuine knowledge or understanding of cognition or the brain. But he also points to some blind alleys that he thinks the model has led to, such as the idea that we store representations of our experiences in the memory register in our brains and then retrieve them when we want to remember something; the related, but less evidently plausible, hypothesis that particular memories are stored in particular neurons (an idea that he calls “preposterous”); and fantasies about our eventually being able to achieve immortality and incredible power by downloading our minds to more sophisticated hardware. Mainly, though, his thought seems to be that the computer model just misdescribes what goes on in ordinary human cognition, and misdescribes it in ways that obscure a better understanding of what’s going on. To illustrate, he points to some alternatives developed by some of the minority of cognitive scientists who reject the standard view.

A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.

Of course, as Epstein notes, it isn’t quite that simple; when the player keeps moving in a way that keeps the ball in a constant linear optical trajectory, there is an almost unimaginably complex series of things going on in the brain, the nervous system, the eye, and so on. What isn’t going on, he insists, is computation.
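
The contrast is concrete enough to sketch in code. What follows is a minimal illustration, not McBeath’s actual model: the one-dimensional ‘optical acceleration cancellation’ heuristic stands in for his two-dimensional linear optical trajectory, and the constants, function names, and Python framing are all my own invention for the example. The first policy estimates initial conditions and solves the physics; the second merely reacts, step by step, to a single visual angle.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
TOP_SPEED = 8.0   # fielder's top running speed, m/s (arbitrary)

def ip_policy(ball_speed, launch_angle, my_x):
    """The IP story: estimate the ball's initial conditions, solve the
    projectile equations for the landing point, then run toward it."""
    landing_x = ball_speed ** 2 * math.sin(2 * launch_angle) / G
    return math.copysign(TOP_SPEED, landing_x - my_x)

def heuristic_policy(tan_history, my_velocity, dt=0.02, gain=2.0):
    """The anti-representational story: build no model of the path.
    Watch tan(elevation angle) of the ball; if it is accelerating, the
    ball will clear you, so back up; if it is decelerating, it will
    fall short, so move in. Cancelling optical acceleration suffices."""
    if len(tan_history) < 3:
        return my_velocity                 # not enough observations yet
    t0, t1, t2 = tan_history[-3:]
    optical_accel = (t2 - 2 * t1 + t0) / dt ** 2   # second difference
    return my_velocity + gain * optical_accel * dt
```

Nothing in the second function builds or consults a representation of the ball’s path; it simply nudges the runner’s speed against the apparent acceleration of one visual angle, which is just the kind of account Epstein has in mind.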

I am, of course, not a cognitive neuroscientist, and so I am not really in much of a position to assess Epstein’s views here. They seem quite plausible to me; more precisely, I think it is quite clear that much of the computational language is not literal, and it seems likely that in the long term, progress in cognitive science will involve the displacement of the computational model. What strikes me about his piece is something a bit different, and mainly revolves around what he says about metaphor.

Epstein makes it quite clear that he sees the goal of cognitive science as “a metaphor-free theory of intelligent human behavior.” The boundaries between literal and metaphoric language are pretty fuzzy, but even so, it’s not at all apparent to me that a metaphor-free theory is something we can aspire to even in principle. Very many of our concepts are, in origin, metaphors; in fact, ‘concept,’ ‘origin,’ and ‘metaphor’ are all originally metaphors (‘concept’ formed from the Latin concipere, literally ‘to grasp together,’ ‘origin’ from Latin origo, literally ‘a (concrete, physical) rising up,’ ‘metaphor’ from Greek metaphora, literally a ‘carrying across’). Even if we rightly regard these concepts as no longer metaphoric, their metaphoric origins pose problems for the view that a metaphor is just “a story we tell to make sense of something we don’t actually understand” and that genuine understanding dispenses with metaphors. Provided that we suppose that we have genuine understanding (however imperfect) via our concepts of, well, ‘concept,’ ‘origin,’ and ‘metaphor,’ then it seems as though we have to acknowledge that metaphors can get us to genuine understanding, even if the metaphors must cease to be metaphors in the process. In other words, it seems as though we don’t come to non-metaphoric understanding (if that is, in fact, what we do) by replacing metaphors with something else, but rather by transforming them. At the very least, the history of just about any concept one will find deployed in contemporary psychology includes a phase in which the concept was formed by metaphor or related processes like analogy.2 So I doubt whether metaphor can be so dispensable as Epstein suggests.

We can see this pretty clearly in Epstein’s own attempts to move toward a “metaphor-free” account of intelligent human behavior.

As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.

There are, of course, long ‘dead’ metaphors all over the place here, of more or less the same kind as ‘concept,’ ‘origin,’ and ‘metaphor’: ‘navigate,’ ‘type,’ ‘observe,’ ‘instructions.’ But the notions of punishment and reward seem like perfectly live metaphors to me. Perhaps to a trained psychologist, these words have lost any essential connection to the human acts of punishing and rewarding, but even if these aren’t fully live metaphors, they’ve got more life in them than the others. To the extent that punishment and reward are essential to Epstein’s understanding of human behavior, his understanding depends on metaphor.

The stranger thing, though, is that Epstein himself seems not to notice that at least some of the language of the computer model is metaphorical as it applies to computers. In describing an exercise in which he asks a person first to draw a dollar bill in as much detail as possible from memory and then to do the same thing while looking at a dollar bill, he says that the phrase ‘from memory’ is a metaphor — and though he doesn’t quite say so, he seems to imply that he takes this as a computer metaphor. But quite obviously the notion of computer memory is a metaphor drawn from the concept of human memory, not the other way around. It’s possible, of course, that what Epstein means here is simply that we tend to think of memory on the model of computer memory as though we were literally retrieving a physical item from a storage space. But whatever he thinks, one of the things that complicates his whole view is that many of the concepts we apply to computers were in fact originally drawn from concepts we applied to human cognition.

Human beings quite obviously process information and engage in mathematical and logical operations. Computers can do these things because human beings, who could already do these things, designed them to do them. This is not to say that computers engage in these operations and process information only metaphorically and not literally, and of course many other notions that we apply to computers are not derived from the concepts that we already applied to human cognition before inventing computers. But the very purpose of a computer is to simulate cognitive processes that human beings perform. This not only makes it entirely unsurprising that computers have become a dominant metaphor for cognition; it makes the computer model importantly disanalogous to the other metaphors that Epstein singles out as historically influential: ‘spirit’ (breath), hydraulics, machines, electricity, chemistry, the telegraph. Some of these weren’t invented at all, and none were invented for the purpose of performing the cognitive tasks that we could already perform. So if the computer model is really just a set of metaphors, it is a set of metaphors drawn, in turn, from concepts already applied to cognition.

Just as importantly, these concepts — processing information, performing logical and mathematical operations, etc. — were not in the first instance attributed to brains, but to people. This, it seems to me, is where the real problem with the computer model of the brain lies. Despite the penchant of philosophers and scientists for what Maxwell Bennett and Peter Hacker call the ‘mereological fallacy’ — “ascribing to the constituent parts of an animal attributes that logically apply only to the whole animal” — it is blindingly non-obvious that the brain, as such, is what engages in information processing or mathematical and logical operations when a human being sits down to, well, process some information or do some math or logic. In fact, however overwhelming the empirical evidence may be for the causal relations between brain activity and human cognition, it’s nothing close to clear that the brain as such, or any part of it, is properly taken as the subject of any distinctively mental predicates. Neither neurons, synapses, and the like nor any of the processes and events of which they are the primary subjects appears to display any intentionality or consciousness, the two features most widely thought to distinguish the mental from the non-mental. Neither does it make any obvious sense to attribute beliefs, desires, intentions, or thoughts to them. Pointing to the apparently ubiquitous causal connections between brain processes and mental states and processes provides no more warrant for claiming that the brain or its components believe, desire, intend, think, or perform mathematical/logical operations or other kinds of information processing than the ubiquitous causal connections between processes in the eye and visual perception provide warrant for claiming that the eyes see, or the ubiquitous causal connections between processes in the ears and auditory perception provide warrant for claiming that the ears hear.

In many contexts, such language is innocuous; there’s even a term for it in classical rhetoric: synecdoche. But what makes for a suitable rhetorical figure does not necessarily make for a suitable proposition of science or metaphysics. Though I have my misgivings about Bennett and Hacker’s Wittgensteinian approach to the philosophy of mind, they seem exactly right to insist that the concepts we routinely employ to understand human cognition apply to whole animals and not to their parts. It would be one thing if we were to uncover empirical evidence that our brain believes, desires, intends, and thinks in exactly the same sense that a whole human being believes, desires, intends, and thinks. But there is no such evidence, or if there is, nobody is pointing to it. Instead, there is a lot of extremely interesting evidence about what goes on when people believe, desire, intend, think, and so on, coupled with a slide from whole to part. Aristotle had already observed in a related context (De Anima 408b) that it is a mistake to infer that the soul “pities or learns or thinks” on the grounds that the human being does these things with or by means of the soul; Bennett and Hacker’s critique of the mereological fallacy in neuroscience is simply a more sophisticated and elaborated riff on a point that Aristotle made nearly 2400 years ago.

Of course, there is hardly a position that some philosopher has not defended, and some philosophers (such as Searle and Dennett, in their responses in the Bennett and Hacker volume) have defended the attribution of intentionality to the brain itself. I have no pretensions to being able to refute two such formidable philosophers in any forum, let alone a blog post. But it seems to me that the real issue here is not whether the computational model of the mind is metaphoric, but whether it is synecdochic. Epstein and Bennett/Hacker seem to present complementary arguments for rejecting the claim that the brain literally computes, processes information, or is the subject of other mental states, processes, and activities. Even supposing they’re right, the computational model may be scientifically useful despite being synecdochic rather than literal (indeed, usefulness seems to be all that Dennett ultimately wants to claim for it). In fact, it might be more useful, because to understand it as synecdochic is to recognize it as a figure of speech that is not literally true.

Metaphors, of course, are also not literally true, but while they can eventually lose their metaphorical character without embodying errors, the boundary between literal and metaphoric speech is difficult to determine. Hardly anybody supposes that concepts involve literally grasping things together in the way that I might grasp several pens from my desk, but the idea that conceptual thought involves grasping a plurality of things as a unity would likely not strike most people as metaphoric; in some sense it is, but in some difficult to specify way it is obviously different from the claim that your husband is your rock. By contrast, a claim like “the brain processes information” is either synecdochic or it is not; either we are simply using a figure of speech to attribute to a part (the brain) what properly belongs to the whole (the animal) or we are claiming that the brain is the thing doing the information processing. Theoretical clarity may be best preserved by avoiding synecdoche altogether, and that, in contrast to avoiding all metaphor, seems like a viable option, at least in principle. But the more important theoretical question seems to be whether it is the brain that does these things, not whether it does them literally or metaphorically.

There is one more striking feature of Epstein’s piece that I can’t resist mentioning. Epstein seems to think that up to now all we’ve ever had for thinking about intelligent human behavior is a bunch of metaphors, stories that “we tell to make sense of something we don’t actually understand.” But Bennett and Hacker’s critique of computationalism seems more incisive than Epstein’s, and, as we’ve seen, Bennett and Hacker are more or less just rehashing Aristotle. In other words, Aristotle had already appreciated the fundamental problem, and he did not need any knowledge of the brain to do so. What this suggests to me is that, unlike the workings of the brain, intelligent human behavior is not something that we can understand only with the aid of distinctively modern scientific knowledge. The point is not that modern scientific knowledge cannot or does not significantly expand and deepen our understanding of ourselves. Nor is it that Aristotle’s psychology is true or adequate, even leaving aside the obviously mistaken physiological dimension of it. It is, rather, that Aristotle (and plenty of other pre-modern philosophers whose theories of psychology are incompatible with his) already operated with concepts that we can still recognize as applying to ourselves in a way that none of the outmoded metaphorical models that Epstein discusses can.

Whether or not we accept Aristotle’s (or Plato’s, or the Stoics’, or Aquinas’, or Hume’s) views, we can recognize ourselves as subjects of desires, beliefs, emotions, intentions, and thoughts in a way that we cannot really recognize ourselves as spirit breathed into dust, as hydraulic systems of the four humors, as very complex mechanical automata, or as telegraphs. There is more self-understanding to be had even from intelligently rejecting the psychology of the Republic, the Nicomachean Ethics, or Augustine’s Confessions than there is from a similar rejection of the literal truth of Genesis or from discarded scientific and pseudo-scientific theories. There are, of course, plenty of philosophers and neuroscientists who will dismiss it all as mere ‘folk psychology’ and offer up a contingent historical narrative to explain why we can still take Aristotle’s or Hume’s accounts of human action seriously in a way that we can’t take the hydraulic theory seriously. But perhaps, for all the celebrated opacity of the self, it is simply easier to understand intelligent human behavior than the physiology of the brain.

1. http://www.thefreedictionary.com/computer
2. I have no sophisticated views on the relationship between metaphor and analogy, which seem to be treated quite differently by linguists, rhetoricians, and philosophers, and by different traditions in each discipline.

9 thoughts on “Brains, Computers, Metaphor, Synecdoche, and People”

  1. “But perhaps, for all the celebrated opacity of the self, it is simply easier to understand intelligent human behavior than the physiology of the brain.” With all due respect, I kind of disagree with this statement. Isn’t it redundant? Elaborate for me.


    • Well, here’s an example. Plato and Aristotle distinguished between two different kinds of desire or motivation: rational motivation based on what a person believes is good or best to do and non-rational appetite, which isn’t. For instance, if you’re hungry, you just want to eat, and that desire doesn’t stem from your belief that it would be good to eat. You might think that it wouldn’t be good to eat, but still be hungry. But, they think, we can also desire things just because we believe that they would be good; you might want to become a doctor, say, because you believe that it would be a really rewarding career. But then if you come to believe that it wouldn’t be a rewarding career, or not the most rewarding career, you can just stop desiring it. There are arguments to be had about whether these are really fundamentally different; Hume and other philosophers, for instance, would insist that any belief-based motivation is ultimately grounded in some desire that has no more to do with reasoning than hunger does. I think that’s wrong, and that Plato and Aristotle were right. But even if we think Hume was right instead, here’s the thing: Plato, Aristotle, and Hume knew virtually nothing about the brain and how it worked (Aristotle thought it had nothing to do with cognition and functioned to cool the body; Plato believed it had to do with cognition, but his reasons for that weren’t especially scientific; Hume knew that the brain is associated with cognition, but he had no idea about most of even the most basic stuff you’d find in a neuroscience textbook today). They knew nothing about how the brain worked, but they were able to identify an important psychological feature of human life (or, if you go with Hume, able to figure out that what looks like an important difference is actually not). All three of their works contain loads of astute observations and reflections about human behavior, but they knew nothing about the brain.

      Does that help clarify the kind of thing I have in mind?


  2. Yes, a little better. I will admit, philosophy is so interesting, but yet difficult to me because I see things more black and white: either it is or it isn’t, and philosophy is all grey. I’m a very open-minded person, but sometimes I get stuck on all the “philosophies” of things. It’s almost like taking a topic and beating it to death!!! Why go that deep? But I’m learning to take the time and dig deeper. That’s the part I have trouble with. For example, this blog: when I read it I thought, OK, so the physiology of the brain is one thing and the intellectual person is another. But then I thought, one doesn’t work without the other. If we do not understand the foundation of a person we will not understand them at all. Maybe it’s me; I see people so differently. I read people really, really well, very quickly. I can get along with anyone, intellectual or not. If a person is flat out lying and you know they are lying because they are explaining themselves way too much, I will question, maybe interrogate, them to prove that they are in fact lying. (I do all this in my head if I don’t know the person.) Quite a few people in my last class were just so young, and when they answered the questions the professor asked, which was extremely rare, I would think “what a stupid answer” in my head and ask myself “why did that person say that, what makes them say these things, or what makes people say the things they say?” I even did that with my other teachers (not Dr. Khawaja!!! never!!), but my other professors’ theories seemed questionable to me, so I asked a lot of questions. If Hume, Aristotle, and Plato didn’t know anything about how the brain works, then what did they really know about human behavior, or people for that matter? Were their findings just theories that were passed on?


    • Well, sometimes philosophy is fairly black and white; it’s just that it’s almost always complex, especially if you do it responsibly and try to take account of the various plausible things that people have said or might say. It can also be difficult at times to see just what the point is supposed to be of drawing so many fine-grained distinctions and treating an issue in such depth; I suppose sometimes that’s because there isn’t much point, but usually there is and it’s just that philosophers aren’t very good at communicating it to people who aren’t already in the thick of it.

      Your reasoning here provides a good case, though, of how fine-grained distinctions can be important. Your idea that neurophysiology and intelligent human behavior go together — one doesn’t work without the other, as you put it — naturally suggests the thought that they’re therefore the same thing, and it also naturally suggests that if we want to understand the one we need to understand the other. You certainly aren’t the only person who has reasoned in that way, whether about the brain and intelligent human behavior or about many other things. But in fact I think neither of these things is true.

      Consider some different examples: your lungs aren’t going to work if your heart stops working, and your heart isn’t going to work if your lungs stop working; you need both of these organs to be functioning in order to survive. But your heart and your lungs are different organs. So we can’t infer that they’re the same thing from the fact that one doesn’t work without the other. So too, we might know plenty about one without knowing much about the other; you might master the chapter in your biology textbook about the heart without having read the one about the lungs yet. The same seems to hold true for other cases where two things are even more closely united. Your bones are composed mostly of collagen and calcium, which are in turn composed of certain combinations of protons and electrons. It makes rather more sense to say that your bones are collagen and calcium than it does to say that your heart is your lungs; certainly it seems that you can’t have bones without a certain configuration of collagen and calcium and that you can’t have that configuration of collagen and calcium without having bones. But a bit of reflection suggests that what it is for something to be a bone is not at all the same as what it is for something to be collagen or calcium or even a particular configuration of collagen and calcium. A bone is a kind of organ that supports and protects other organs, produces red and white blood cells, and enables an organism to move around and support itself; collagen and calcium are nothing of the sort, even though when combined in the right way they make up bones. So too, scientists knew a lot about bones before they knew anything about calcium and collagen, and you and I could learn a lot about bones without learning anything about their chemical composition; and of course we could also learn a lot about collagen and calcium without knowing anything about the protons and electrons that compose them. In general, we can come to know a whole lot about the functioning of things without knowing much at all about their material composition. Think about computers again: you know plenty about how they work, what they do, and so on; but how much do you really know about the material things that they’re made of?

      In more abstract terms, what these examples illustrate is that we cannot infer that x and y are identical from the fact that each is a necessary condition of the other (hearts and lungs) or from the fact that one of them composes the other (collagen & calcium and bones, protons & electrons and calcium), and we do not need to understand the necessary conditions or material composition of a thing in order to understand it at all. Of course, if we don’t understand the necessary conditions and the material composition of a thing, then we do not fully understand the thing. But we should not think that we can only understand something at all if we understand it fully. As always, there have been philosophers who deny these claims, because, well, for almost any idea you can come up with there is a clever philosopher out there who will defend it. But virtually every philosopher would agree that there’s at least something to these claims (which, coincidentally, were all already made quite plain by Aristotle, though he of course knew nothing about collagen and calcium, let alone protons and electrons).

      You also give a pretty good example yourself. You say you read people very well, can often tell when people are lying, and are good at understanding people. But what does any of that have to do with your knowledge of the brain? Little or nothing, it seems. So even if we need to understand the brain in order to understand human behavior fully, your own ability to understand a lot of it without any neuroscience shows that we don’t need to understand the brain in order to understand human behavior at all.

      Did Plato, Aristotle, and Hume really know anything, or were their findings just theories? Well, I’m sure you don’t want me to go into painstaking detail about the different things we might mean by “knowledge” and “theory,” but let’s just say this: even if their theories are wrong, we can still find much insight in them because the concepts they use to try to understand human beings are still ones that we can see as applying to ourselves, so that if we discover that they are wrong, that’s not trivial, and in coming to see why they’re wrong we can learn something important about ourselves. By contrast, the theory that certain features of our mental lives are the products of certain balances or imbalances of the four humors in our bodies is wrong, but we do not learn anything important about ourselves by seeing why it’s wrong; there just aren’t any such things as humors, and the theory operates with a set of concepts that we cannot seriously regard as applying to ourselves. In general, past theories about human physiology aren’t of much value for us now if what we want to know about is human physiology; but (some) past theories about human psychology are still of value for us now if what we want to know about is human psychology, action, and motivation.

      I guess the best proof of this would just be to show you by having you take a class in which we read Plato and Aristotle but also read a bunch of Hippocratic medical authors talking about the brain. I’m pretty sure you’d see how the former remain relevant in ways that the latter simply don’t.


  3. Perhaps the relevant concept of INFORMATION refers to something like this: condition that-P of physical system A corresponding, for some period of time, to physical system B having condition or property that-Q (in which case the condition that-P of A “encodes” the information that-B-has-Q-at-t). In this sense, physical systems “register” and “encode” information about other physical systems all the time. But the processes here appear to be merely causal, not logical: that-P (or A-having-condition-P-at-t), whatever it is, would not need to have any particular logical relationship to the proposition that-B-has-Q-at-t for the “registering” or “encoding” to occur or be the case. I think that cognitive scientists use the language of information, information registering, and information encoding in a very similar way. Though there is, to be sure, something special about how conscious animals register and encode information, it is fundamentally the same thing. Similarly for electronic computers, though, at least in general, we provide them with information that they then may “perform operations” on.
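
    To make the technical sense vivid, here is a toy sketch in code (the systems, names, and coupling constant are invented for the illustration). Condition that-P of system A, the curvature of a coil, comes to correspond to condition that-Q of system B, the temperature of a room, through a merely causal process; no logical relation between the two is involved.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Room:                  # physical system B
        temp_c: float

    @dataclass
    class Thermostat:            # physical system A
        coil_mm: float = 0.0     # curvature of a bimetallic coil

        def couple_to(self, room: Room) -> None:
            # Brutely causal: the coil bends in proportion to the heat
            # it absorbs. The correspondence between coil_mm and temp_c
            # is just physics; nothing here stands in a logical relation
            # to the proposition that the room is at 21.5 degrees.
            self.coil_mm = 0.1 * room.temp_c

    room = Room(temp_c=21.5)
    stat = Thermostat()
    stat.couple_to(room)

    # An observer who knows the coupling can read the room's condition
    # off the coil, but the inference is the observer's, not the coil's.
    print("inferred temperature:", round(stat.coil_mm / 0.1, 1))
    ```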

    What about information processing, though? I take it that some, but not all, learning is information processing – i.e., processes that follow, or tend to follow, logical or mathematical rules that are truth-preserving or that likely lead to related further truths (at the abstract level of propositions and truth-values). I take it that we register, encode, and have access (often conscious access) to stocks of information about all sorts of things around us. And there is nothing preventing one condition of a brain (or central nervous system or whole organism) encoding distinct bits of information about different things (this is just one thing having more than one function or functional role). The interesting hypothesis here, then, is not that all processes that result in learning are processes that do or tend to follow logical or mathematical rules, but rather that all sorts of non-conscious learning processes are like inferences – including inferences performed by computers – in that they do follow or tend to follow logical or mathematical rules.

    This hypothesis seems quite promising and I take it that there is quite a bit of good evidence for it. However, I think it is of limited use in explaining human thinking (or coming to understand it better by learning things about computers). The reason for this is that human thinking is motivated. Without motivation, we could not think – or direct our thinking to things that are salient to our topic of thought or important things to have opinions about. Good thinking, in humans, as much involves paying attention to our “inklings” and “intuitions” as it does carefully following logical or mathematical rules starting from some given or on-going supply of “information registering” or “information encoding” starting points. On the other hand, if, say, our visual system or our emotions come to encode certain information via a process akin to logical inference, this seems like an interesting and important thing to know about our visual systems or emotions.

    (Cutting somewhat against this hypothesis, though, I wonder about things like our tendency to trust reports of fact from other people, and our tendencies to make judgments about what other people are thinking and feeling (in some sense on the basis of things like body posture and facial expressions). These inference patterns are broadly reliable, but the relevant rules seem to be associative and causal, not logical. If this is right, then the scope of the hypothesis needs to be trimmed back somewhat. Some good thinking in humans is not good thinking due to a tendency to follow good logical or mathematical rules.)

    I guess these are reasons for thinking that there is at least some clarity and value in the sort of model that Epstein and you are attacking. I’m not so much specifically rebutting Epstein’s point about metaphor or your point about synecdoche – these are interesting and important points that may well weigh against it – as I am providing clarity and reasons for the other side of the debate. Though I’m also not convinced that, when carefully articulated and hedged, the sort of hypothesis in question needs to commit one to the particular models under attack (like the model of human memory as “retrieving” stored images or beliefs – like computers do!). Nor is it clear to me how analogous, in the end, human thinking, or good human thinking, is to computer-style logical inference, etc. Computers have neither appropriate cognitive motivation nor non-logical rules of reliable thinking as essential means to encoding additional bits of information.


    • A part of me is inclined to say that we really can’t seriously doubt whether the computational model has value; unless one is prepared to bet that future shifts in cognitive science will lead to the wholesale abandonment of the work that has been done on the computational model, then its value is right there to be found. But I don’t tend to think that the value of empirical science has much to do with whether it is literally true or cuts nature at its joints; in fact it seems rather more likely that very little of it does, and not because I endorse any general anti-realist theses, but because it seems pretty clear that theoretical success in empirical science — as assessed by what in fact gains acceptance — simply doesn’t depend on the theories’ literal truth or on their capturing the mind-independent structure of the things being studied. That said, since scientists and fans of science certainly like to act as though successful theories identify the literal, mind-independent truth of things, it’s worth asking in particular cases just how sensible it is to think so.

      The conception of ‘information’ that you lay out here seems more or less like what a lot of people have in mind. What baffles me about it is why anyone would think that it’s a remotely adequate conception of information. If we step back from this technical sense of the term, ‘information’ seems very much like a concept that implies intentional content and meaning. But as you point out, on the conception you’ve sketched, ‘information’ need involve no such thing; it can be brutely causal. In fact, as you characterize information, I’m not sure it even needs to involve causation, at least not of any direct sort; all it requires is that there be some sort of correspondence between condition P of A and condition Q of B, and while we would expect that any regular correspondence would have to admit of some causal explanation, that explanation need not posit any direct causal relationship between A and B. But even when it does involve a direct causal relationship, why on earth should we think that such relationships involve information? There is a regular correspondence between the position of the sun and the heat of my sidewalk, but unless we’re just stipulating that ‘information’ be used in this technical sense, it’s hardly sensible to talk of my sidewalk “encoding information” about the position of the sun. Of course, the regular correspondence between the two allows us to treat the heat of the sidewalk as providing us with some information about the position of the sun, but that is because we have the intellectual abilities to grasp the correspondence and draw inferences from one to the other. By contrast, when you ask me where I was born and I say, “Wheeling, West Virginia,” I am quite literally communicating information to you, because my words bear meaning and an intentional relationship toward things in the world. Even here, though, the brute sounds that I utter do not as such carry information; they have the meaning and intentionality they do only relative to the conventional system of signs that is the English language. Most generally put, it’s hard to see why anyone would think that information is a mind-independent feature of anything.

      Why not just stipulate that we’ll use ‘information’ in the technical way, though? Well, we can do that, but we risk equivocating systematically without noticing it, as I suspect is just what happens when cognitive scientists use the term this way. When you say “I take it that we register, encode, and have access (often conscious access) to stocks of information about all sorts of things around us. And there is nothing preventing one condition of a brain (or central nervous system or whole organism) encoding distinct bits of information about different things (this is just one thing having more than one function or functional role),” you seem to be doing little more than equivocating between the ordinary sense of ‘information,’ to which intentionality and meaning (if not consciousness) are essential, and the technical sense, which is, as you note, equally applicable to brutely causal relations. You say there’s nothing preventing one condition of a brain, central nervous system, or organism from “encoding bits of information,” but if you intend to say that there’s nothing preventing it from doing so in precisely the same sense that “we register, encode, and have access to stocks of information about all sorts of things around us,” then there seems to be quite a lot to prevent it, namely that there is no clear reason to attribute intentionality to the brain or central nervous system solely on the basis of the causal relationships between processes and states of the brain/nervous system and processes of registering, encoding, and accessing information. Unless (per impossibile, I should think) intentionality can be reduced to causation, there is no valid inference from causation to intentionality.

      Of course, you may not be equivocating at all; perhaps you mean that “we register, encode, and have access to” information simply in the technical sense that you laid out earlier. But if that’s what we’re talking about, then I fail to see how we’re talking about anything mental at all, again because nothing about ‘information’ so understood necessarily involves any intentionality or meaningful content. This seems to me to lead straight to the absurd suggestion, which scientists are now taking seriously, that plants perceive. This apparently striking claim turns out to mean nothing more than that plants react to their environment and adjust their behavior in particular ways. There is nothing perceptual about it at all, provided that ‘perception’ refers to what we and other animals do when we see, hear, touch, smell, and taste things. Perception in its ordinary sense involves intentionality (and, perhaps even more problematically, intensionality) and consciousness, and cannot be identified simply with behavioral responses to environmental factors. Perhaps perception is a useful metaphor in these cases (though I doubt it), and perhaps studying the kind of environment-sensitive behavior observable in plants can help us to understand the physiology of perception. But barring some demonstration that intentionality and consciousness either don’t exist (a dubiously coherent suggestion) or are pervasive (which even panpsychists seem disinclined to accept), to talk this way without adverting to its metaphorical character is either to equivocate or to attempt to define the mind out of existence. Neither seems like good science or good philosophy.

      Epstein takes for granted that attributions of ‘information processing’ to computers are literal and insists that their attribution to the brain is metaphorical. But I’m not sure that even computers literally process information, if we take the paradigm of processing information to be cases in which a person performs logical/mathematical operations on meaningful content. Computers manipulate symbols in accordance with rules, but the symbols have content and meaning only relative to the intentional states of conscious, intelligent beings such as ourselves. Computers aren’t conscious and I’m not aware of any reason to suppose that anything they do involves intrinsic intentionality. Metaphors become literal, so perhaps it’s just an irrevocable fact about the English language that “computers process information” is true. But if so, that’s a fact about the English language, not one that entails that what computers do and what people do are the same thing.
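
      The observer-relativity of the symbols is easy to exhibit. In the sketch below (the particular bit pattern is arbitrary and chosen only for the example), one and the same physical state of a machine counts as an integer, a string of text, or an approximation of a real number, depending entirely on the interpretive convention we bring to it; the machine’s rule-governed manipulations are indifferent to which reading is in play.

      ```python
      import struct

      raw = bytes([0x41, 0x42, 0x43, 0x44])   # one physical state: 32 bits

      # The very same bits under three interpretive conventions:
      print(int.from_bytes(raw, "big"))       # 1094861636 (unsigned integer)
      print(raw.decode("ascii"))              # 'ABCD' (text)
      print(struct.unpack(">f", raw)[0])      # ~12.14 (IEEE-754 float)
      ```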

      But I’m quickly getting out of my league here, and I know from experience that philosophers of mind are not often inclined to be convinced by what I have to say on the matter. This is just another benefit of being a historian; the stuff I focus my energies on will still be the same in 25 years when the current fashions in other areas of philosophy have shifted dramatically.


  4. That is all very helpful, David – thanks. I can concur with much of what you say…

    (a) Curious how science can get results without, in any way or sense, ‘representing the world as it is’ or ‘cutting nature at its joints’. How do we avoid positions like crude instrumentalism? I’m inclined to think that, though most (good, useful, predictively successful) scientific models or theories are not literally true, they nevertheless represent things accurately in some partial way that may be somewhat opaque (absent a good theory to explain just what is going on). Some physicists say that only the mathematical equations of the theories represent nature “as it truly is.” That strikes me as a bit too extreme, but on the right track for “separating out” the accurately-representing elements that – I think – have to be what does the work of accurate prediction, etc.

    (b) Causation, not just correlation, may be what is important in picking out the interesting features that fly under the banner ‘information (in the technical sense)’. Perhaps the important feature concerns one system “imprinting” another (there, I have avoided the term ‘information’!) and this ‘imprint’ affecting the causal powers of the imprinted-upon physical system? Anyhow, good on you for catching my fudge.

    (c) I agree that one danger in using the term ‘information’ is confusing something like the above imprinting-and-affecting type feature with either (i) correlations or natural “signs” that, when cognized as such, constitute conscious information or (ii) conscious information itself (intentionality). In my words about consciousness, I was using ‘information’ in the technical sense. But that is controversial. The reason behind doing this is the idea that conscious, intentional states are simply complicated versions of the relevant generic feature in the “imprinting-and-affecting” family. But this hypothesis needs to be defended, not simply assumed – and I was implicitly assuming it. Bad on me.

    It is good that you are pushing in the direction that you are on this. The relevant family of hypotheses, even shorn of the flat-out confusion and questionable equation of apparently different things, is controversial, counter-intuitive, and needs to be spelled out carefully. And the potential equivocations need to be avoided like the plague! (Would using ‘physical information’ instead of ‘information’ to get at the technical sense of ‘information’ help at all, I wonder?) It could be that, ultimately, consciously affirming propositions (and inferring propositions from other propositions) has nothing much interesting in common with either one physical system registering and encoding states of another system (physical information) or computers doing stuff that is, or can be interpreted as, inference. And really that should be the null hypothesis here (as some sort of dualism should be in the philosophy of mind). Ultimately, all of this is out of my league as well, but I find it interesting and fun to get as far as I can in a rigorous, accurate, humble (!) way.


    • I suppose I’m not too averse to accepting crude instrumentalism about empirical science at least some of the time, but I’d agree that it often seems to give us more than what instrumentalism would allow. But I think it can give us quite a lot of genuine knowledge even without yielding the kind of results that naive scientific realists suppose it does. There are at least three features that lead me to doubt that successful empirical science necessarily lives up to the demands of that kind of realism: the role of metaphor and similarly non-literal language, the role of (pragmatic) explanatory interests in shaping what counts as a unit of inquiry, and the fact that empirical scientific theories wildly underdetermine metaphysical theories. We’ve been discussing the first, and though I’ve made much of the difficulty of distinguishing metaphorical from literal language, metaphors and analogical models at least complicate the picture. The second seems rather more serious. While I think the same issues arise for any empirical science, they’re maybe easiest to see in psychology. Consider the categories of ‘ADHD’ and ‘authoritarianism,’ to take two examples. Both of these have been treated as a single thing that can be theorized about, and there’s been some at least minimal success in doing so, but there’s also been a serious challenge to the notion that there is a single condition that causes the variety of symptoms that are currently traced to ADHD, the alternative hypothesis being that there is instead a variety of different conditions with broadly similar symptoms. Social psychologists are now apparently treating ‘authoritarianism’ as a thing, and in doing so they’ve been able to find some interesting correlations with other alleged facets of personality, but it doesn’t take much skepticism to doubt that there is really a single condition, or even a unified set of conditions, that leads someone to score high on the ‘authoritarianism’ dimension. I don’t know the literature in psychology or the philosophy of psychology very well, but some of my earliest philosophical thoughts involved wondering whether categories like these really pick out natural kinds (though I wouldn’t have put it in those terms when I was a 14-year-old kid talking to my mom about her psych class). Perhaps this issue arises more for psychology and other social sciences than for biology, chemistry, and physics, but there too we can sensibly wonder whether the categories are shaped more by our interests in certain phenomena than by the real mind-independent structure of things.

      In a way, both of these are just special cases of the general issue, which is that our empirical scientific theories are consistent with a variety of incompatible metaphysical schemes, and the success of the empirical science doesn’t depend on which of the metaphysical schemes we adopt. Consider debates about material composition and just when it is that many things genuinely compose one thing. Universalism, nihilism, and most varieties of restricted composition could each be true without that making the slightest difference to how empirical science is done and which theories gain acceptance. If Peter van Inwagen is right, then you and I exist, but our cars and chairs do not; if nihilists are right, none of us exist and only simples do; I frankly don’t understand what universalists want to say about the matter (to the extent that they take it to be a matter of convention that some plurality of things composes one thing, they seem to me to be nihilists after all; to the extent that they want to claim that any given plurality of things really composes a single thing, then, well, I guess you and I get to exist, but so does the unit composed of you and me, and the unit composed of you and my left toe). We might think (I do) that some theories of composition make a better fit with science than others, but science doesn’t logically exclude any of them or point unequivocally in one direction rather than another; even if we exclude nihilism and universalism, we’re still left with a wide variety of alternatives, and empirical science simply underdetermines which ones we accept. On van Inwagen’s view, there is not literally such a thing as oxygen, there are just simple material particles arranged oxygen-wise, i.e., in ways that lead to behavior that we associate with oxygen; the fact that scientists talk about oxygen as though it were a single thing does not by itself pose any serious problems for this view, even if we think, contra van Inwagen, that the view doesn’t offer the most satisfactory account of scientific practice. Science simply need not consider the issue, and manifestly does not consider it (try getting a scientist who doesn’t happen to have a side-interest in metaphysics to even consider this question!). Whether or not the scientific theory adequately captures the mind-independent structure of the world depends on which of these theories is true, but the science has and will go on in much the same way regardless of which of them is true and regardless of what the majority of philosophers think about the matter. There are loads of similar metaphysical questions for which the same holds good; causation, famously, but just about any important metaphysical question we care to mention will be underdetermined by empirical science, and empirical science will not be greatly affected by which is true (whether or not science is affected by the metaphysical views its practitioners hold is a different question to which I think there is a different answer, but not one that I think shows that any particular metaphysics is required by science).

      These are all good reasons to resist treating empirical scientific theories as if their success were strong evidence that they capture the mind-independent structure of things. But they’re not, so far as I can see, good reasons to doubt that successful empirical science gives us a genuine cognitive grasp on the mind-independent world (though even that question is not one that empirical science can adequately answer). Even if we think that full-blown theoretical understanding requires cutting nature at its joints and all that, there just doesn’t seem to be any good reason to think that we don’t understand things at all if we don’t have such knowledge. Even metaphor can afford us genuine knowledge. At least, it had better, else we have virtually no knowledge at all.

      As for the stuff about computation and information, it’s certainly true that the dominant technical conception of information involves causation and not mere correlation. I don’t think we can seriously doubt that the model is a scientifically fruitful one: if the basic idea is of a physical system generating output by responding to input in accordance with a set of rules, then it seems clear that this is in fact a pervasive feature of physical systems and that, so far as it goes, the brain does it, my computer does it, my car radio does it, plants do it, etc. I don’t have much to add to my case for doubting the appropriateness of regarding this as information in any substantive sense. But even if we set those worries aside, it seems to me that we should be just as interested in differences between physical systems that can be described in this way, and that the fact that we can describe so many different kinds of physical systems in this way does not have any especially deep significance. It does have some deep significance, I think; it seems to bolster the case for causal realism, for taking structure seriously as a basic metaphysical principle, for endorsing realism about non-mentalistic natural teleology, etc. But even independently of the underdetermination of metaphysics by empirical science, the deep significance here would not be some groundbreaking new discovery that alters our fundamental understanding of the world. That physical entities are, by virtue of their structure, such as to generate specific patterns of behavior in response to certain selective features of their environment is something that Aristotle and Democritus would have agreed on. The language of information and computation does not seem to offer any groundbreaking new explanation of this feature of the world. If it seems to, that’s the misleading effect of metaphor; once the technical terms are fully explicated, what we have is at best a very useful general set of concepts for classifying and describing physical systems in general, not a radical new explanation of why physical systems are as they are, or even a radical new account of what physical systems are. Perhaps one could think that, even stripped of the misleading connotations of the metaphor, the model shows that all physical systems are really nothing but syntactic engines, in the relevant technical sense. But this just doesn’t follow; the fact, if it is one, that plants and people are both syntactic engines does nothing to show that plants perceive or that people don’t — it is only a crude and fallacious reductionistic inference that leads to that conclusion.
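
      For what it’s worth, that basic idea can be made schematic in a few lines of code (the stomata-like rule table below is invented for the example, not a model of any real plant): a system that generates output from input in accordance with a set of rules, described at a level of abstraction at which a thermostat, a plant, and a CPU all equally qualify, and at which nothing whatever is said about meaning, perception, or understanding.

      ```python
      # (state, input) -> (next state, output): pure rule-following.
      RULES = {
          ("closed", "light"): ("open",   "opening"),
          ("open",   "dark"):  ("closed", "closing"),
          ("open",   "light"): ("open",   "steady"),
          ("closed", "dark"):  ("closed", "steady"),
      }

      def run(state, inputs):
          for symbol in inputs:
              state, output = RULES[(state, symbol)]
              print(symbol, "->", output)
          return state

      run("closed", ["light", "light", "dark"])
      ```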

      Ahh, I clearly need to stop procrastinating and get back to translating Plato.

