Aeon has an interesting piece by psychologist Robert Epstein on why the brain is not a computer. In one sense, this is just a truism. Computers are machines made by human beings, whereas brains are animal organs that have evolved over a very long period of time; computers are made of silicon chips and metal, brains aren’t; computers aren’t neurochemical, brains are; brains can do lots of things that computers can’t (yet, anyway); and so on. This truism, though, depends on a rather imprecise, colloquial sense of the word ‘computer.’ More strictly speaking, a computer is just any device that computes, that is, “performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”1 In this sense, many cognitive scientists believe that the brain is literally a computer. While it is of course not a ‘device’ designed by human beings, it nonetheless performs mathematical and logical operations and assembles, stores, correlates, and more generally processes information. Indeed, to many people, and not just cognitive scientists, it might seem that the truism is that the brain is a computer in this sense.
When Epstein says that the brain is not a computer, he means that it is not literally a computer; it does not literally process information, store memories, or retrieve information. To speak about the brain in this way is to speak metaphorically rather than literally; the brain doesn’t process information any more than your spouse is the light of your life, Bach speaks to your soul, or your grandmother knows in her heart that you are a good person. This is hardly a truism. As Epstein emphasizes, it flies in the face of the dominant view in cognitive science. Of course, scientists who maintain that the brain literally processes information and the like need not maintain that when you and your computer each calculate the number of years it will take you to pay back your student loans, you and your computer are doing exactly the same thing. They do maintain, however, that you and your computer are both engaged in literally the same kind of thing, namely calculating the number of years it will take you to pay back your loans, or, more generally, processing information and performing mathematical and logical operations. What’s more, the dominant view in cognitive science is not just that your brain and your computer do some of the same things, but that most of what the brain does is information processing, and in exactly the same sense of ‘information processing’ in which your computer processes information.
Epstein doesn’t simply hold that this language is metaphorical rather than literal. He also insists that it is hindering scientific progress in understanding the brain and human cognition generally. This claim is stronger than the view that the language is metaphorical, and doesn’t follow from it. After all, we might readily concede that the computer language is metaphorical, but argue that it is a useful model that yields genuine insight. Epstein disagrees.
To some extent his argument seems to rest on the idea that simply because the computer model is metaphorical, it’s therefore literally false and hence cannot give us genuine knowledge or understanding of cognition or the brain. But he also points to some blind alleys that he thinks the model has led to, such as the idea that we store representations of our experiences in the memory register in our brains and then retrieve them when we want to remember something; the related, but less evidently plausible, hypothesis that particular memories are stored in particular neurons (an idea that he calls “preposterous”); and fantasies about our eventually being able to achieve immortality and incredible power by downloading our minds to more sophisticated hardware. Mainly, though, his thought seems to be that the computer model just misdescribes what goes on in ordinary human cognition, and misdescribes it in ways that obscure a better understanding of what’s going on. To illustrate, he points to some alternatives developed by some of the minority of cognitive scientists who reject the standard view.
“A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.
“My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
“That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.”
Of course, as Epstein notes, it isn’t quite that simple; when the player keeps moving in a way that keeps the ball in a constant linear optical trajectory, there is an almost unimaginably complex series of things going on in the brain, the nervous system, the eye, and so on. What isn’t going on, he insists, is computation.
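The contrast can be made concrete with a toy simulation. What follows is emphatically not McBeath et al.’s actual model (their linear-optical-trajectory account is two-dimensional); it is a crude sketch of the simpler one-dimensional cousin often attributed to Seville Chapman, with made-up launch numbers and a made-up fielder speed. The point it illustrates is the one at issue: the fielder never estimates initial conditions or predicts a landing point; he just runs so that the tangent of the ball’s elevation angle keeps rising at the rate it started with.

```python
G = 9.8                 # gravity, m/s^2
VX, VY = 15.0, 20.0     # ball's horizontal/vertical launch speed (m/s) -- made-up numbers

def catch(start_x=40.0, speed=6.0, dt=0.005):
    """Toy fly-ball chase; returns (landing point, fielder's final position)."""
    t, fx = 0.0, start_x
    last_tan, r0 = None, None
    while True:
        t += dt
        bx = VX * t                        # ball's horizontal position
        by = VY * t - 0.5 * G * t * t      # ball's height
        if by <= 0:                        # ball has landed
            return bx, fx
        # tangent of the ball's elevation angle as the fielder sees it
        tan_now = by / max(abs(fx - bx), 1e-9)
        if last_tan is not None:
            rate = (tan_now - last_tan) / dt
            if r0 is None:
                r0 = rate                  # reference rate, sampled while standing still
            elif rate > r0:
                fx += speed * dt           # image rising too fast: ball will carry -- back up
            else:
                fx -= speed * dt           # image rising too slowly: ball will drop short -- run in
        last_tan = tan_now

land, final = catch()                      # feedback fielder ends up near the landing point
land0, rooted = catch(speed=0.0)           # a rooted fielder misses by roughly 20 m
```

The feedback rule is deliberately dumb (bang-bang, no smoothing, no model of gravity or drag anywhere), and it still steers the fielder close to the landing point, which is roughly what Epstein means by behaviour that is orderly without being computed over an internal representation of the trajectory.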
I am, of course, not a cognitive neuroscientist, and so I am not really in much of a position to assess Epstein’s views here. They seem quite plausible to me; more precisely, I think it is quite clear that much of the computational language is not literal, and it seems likely that, in the long term, progress in cognitive science will involve the displacement of the computational model. What strikes me about his piece is something a bit different, which mainly revolves around what he says about metaphor.
Epstein makes it quite clear that he sees the goal of cognitive science as “a metaphor-free theory of intelligent human behavior.” The boundaries between literal and metaphoric language are pretty fuzzy, but even so, it’s not at all apparent to me that a metaphor-free theory is something we can aspire to even in principle. Very many of our concepts are, in origin, metaphors; in fact, ‘concept,’ ‘origin,’ and ‘metaphor’ are all originally metaphors (‘concept’ formed from the Latin concipere, literally ‘to grasp together,’ ‘origin’ from Latin origo, literally ‘a (concrete, physical) rising up,’ ‘metaphor’ from Greek metaphora, literally a ‘carrying across’). Even if we rightly regard these concepts as no longer metaphoric, their metaphoric origins pose problems for the view that a metaphor is just “a story we tell to make sense of something we don’t actually understand” and that genuine understanding dispenses with metaphors. Provided that we suppose that we have genuine understanding (however imperfect) via our concepts of, well, ‘concept,’ ‘origin,’ and ‘metaphor,’ then it seems as though we have to acknowledge that metaphors can get us to genuine understanding, even if the metaphors must cease to be metaphors in the process. In other words, it seems as though we don’t come to non-metaphoric understanding (if that is, in fact, what we do) by replacing metaphors with something else, but rather by transforming them. At the very least, the history of just about any concept one will find deployed in contemporary psychology includes a phase in which the concept was formed by metaphor or related processes like analogy.2 So I doubt whether metaphor can be so dispensable as Epstein suggests.
We can see this pretty clearly in Epstein’s own attempts to move toward a “metaphor-free” account of intelligent human behavior:
“As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.”
There are, of course, long ‘dead’ metaphors all over the place here, of more or less the same kind as ‘concept,’ ‘origin,’ and ‘metaphor’: ‘navigate,’ ‘type,’ ‘observe,’ ‘instructions.’ But the notions of punishment and reward seem like perfectly live metaphors to me. Perhaps to a trained psychologist, these words have lost any essential connection to the human acts of punishing and rewarding, but even if these aren’t fully live metaphors, they’ve got more life in them than the others. To the extent that punishment and reward are essential to Epstein’s understanding of human behavior, his understanding depends on metaphor.
The stranger thing, though, is that Epstein himself seems not to notice that at least some of the language of the computer model is metaphorical as it applies to computers. In describing an exercise in which he asks a person first to draw a dollar bill in as much detail as possible from memory and then to do the same thing while looking at a dollar bill, he says that the phrase ‘from memory’ is a metaphor — and though he doesn’t quite say so, he seems to imply that he takes this as a computer metaphor. But quite obviously the notion of computer memory is a metaphor drawn from the concept of human memory, not the other way around. It’s possible, of course, that what Epstein means here is simply that we tend to think of memory on the model of computer memory as though we were literally retrieving a physical item from a storage space. But whatever he thinks, one of the things that complicates his whole view is that many of the concepts we apply to computers were in fact originally drawn from concepts we applied to human cognition.
Human beings quite obviously process information and engage in mathematical and logical operations. Computers can do these things because human beings, who could already do these things, designed them to do them. This is not to say that computers engage in these operations and process information only metaphorically and not literally, and of course many other notions that we apply to computers are not derived from the concepts that we already applied to human cognition before inventing computers. But the very purpose of a computer is to simulate cognitive processes that human beings perform. This not only makes it entirely unsurprising that computers have become a dominant metaphor for cognition; it makes the computer model importantly disanalogous to the other metaphors that Epstein singles out as historically influential: ‘spirit’ (breath), hydraulics, machines, electricity, chemistry, the telegraph. Some of these weren’t invented at all, and none were invented for the purpose of performing the cognitive tasks that we could already perform. So if the computer model is really just a set of metaphors, it is a set of metaphors that were, in turn, created as metaphors on the basis of concepts already applied to cognition.
Just as importantly, these concepts — processing information, performing logical and mathematical operations, etc. — were not in the first instance attributed to brains, but to people. This, it seems to me, is where the real problem with the computer model of the brain lies. Despite the penchant of philosophers and scientists for what Maxwell Bennett and Peter Hacker call the ‘mereological fallacy’ — “ascribing to the constituent parts of an animal attributes that logically apply only to the whole animal” — it is blindingly non-obvious that the brain, as such, is what engages in information processing or mathematical and logical operations when a human being sits down to, well, process some information or do some math or logic. In fact, however overwhelming the empirical evidence may be for the causal relations between brain activity and human cognition, it’s nothing close to clear that the brain as such, or any part of it, is properly taken as the subject of any distinctively mental predicates. Neither neurons, synapses, and the like nor any of the processes and events of which they are the primary subjects appears to display any intentionality or consciousness, the two features most widely thought to distinguish the mental from the non-mental. Neither does it make any obvious sense to attribute beliefs, desires, intentions, or thoughts to them. Pointing to the apparently ubiquitous causal connections between brain processes and mental states and processes provides no more warrant for claiming that the brain or its components believe, desire, intend, think, or perform mathematical/logical operations or other kinds of information processing than the ubiquitous causal connections between processes in the eye and visual perception provide warrant for claiming that the eyes see, or the ubiquitous causal connections between processes in the ears and auditory perception provide warrant for claiming that the ears hear.
In many contexts, such language is innocuous; there’s even a term for it in classical rhetoric: synecdoche. But what makes for a suitable rhetorical figure does not necessarily make for a suitable proposition of science or metaphysics. Though I have my misgivings about Bennett and Hacker’s Wittgensteinian approach to the philosophy of mind, they seem exactly right to insist that the concepts we routinely employ to understand human cognition apply to whole animals and not to their parts. It would be one thing if we were to uncover empirical evidence that our brain believes, desires, intends, and thinks in exactly the same sense that a whole human being believes, desires, intends, and thinks. But there is no such evidence, or if there is, nobody is pointing to it. Instead, there is a lot of extremely interesting evidence about what goes on when people believe, desire, intend, think, and so on, coupled with a slide from whole to part. Aristotle had already observed in a related context (De Anima 408b) that it is a mistake to infer that the soul “pities or learns or thinks” on the grounds that the human being does these things with or by means of the soul; Bennett and Hacker’s critique of the mereological fallacy in neuroscience is simply a more sophisticated and elaborated riff on a point that Aristotle made nearly 2400 years ago.
Of course, there is hardly a position that some philosopher has not defended, and some philosophers (such as Searle and Dennett, in their responses in the Bennett and Hacker volume) have defended the attribution of intentionality to the brain itself. I have no pretensions to being able to refute two such formidable philosophers in any forum, let alone a blog post. But it seems to me that the real issue here is not whether the computational model of the mind is metaphoric, but whether it is synecdochic. Epstein and Bennett/Hacker seem to present complementary arguments for rejecting the claim that the brain literally computes, processes information, or is the subject of other mental states, processes, and activities. Even supposing they’re right, the computational model may be scientifically useful despite being synecdochic rather than literal (indeed, usefulness seems to be all that Dennett ultimately wants to claim for it). In fact, it might be more useful, because to understand it as synecdochic is to recognize that it is a figure of speech and not literally true.
Metaphors, of course, are also not literally true, but while they can eventually lose their metaphorical character without embodying errors, the boundary between literal and metaphoric speech is difficult to determine. Hardly anybody supposes that concepts involve literally grasping things together in the way that I might grasp several pens from my desk, but the idea that conceptual thought involves grasping a plurality of things as a unity would likely not strike most people as metaphoric; in some sense it is, but in some difficult-to-specify way it is obviously different from the claim that your husband is your rock. By contrast, a claim like “the brain processes information” is either synecdochic or it is not; either we are simply using a figure of speech to attribute to a part (the brain) what properly belongs to the whole (the animal) or we are claiming that the brain is the thing doing the information processing. Theoretical clarity may be best preserved by avoiding synecdoche altogether, and that, in contrast to avoiding all metaphor, seems like a viable option, at least in principle. But the more important theoretical question seems to be whether it is the brain that does these things, not whether it does them literally or metaphorically.
There is one more striking feature of Epstein’s piece that I can’t resist mentioning. Epstein seems to think that up to now all we’ve ever had for thinking about intelligent human behavior is a bunch of metaphors, stories that “we tell to make sense of something we don’t actually understand.” But Bennett and Hacker’s critique of computationalism seems more incisive than Epstein’s, and, as we’ve seen, Bennett and Hacker are more or less just rehashing Aristotle. In other words, Aristotle had already appreciated the fundamental problem, and he did not need any knowledge of the brain to do so. What this suggests to me is that unlike anything having to do with the brain, intelligent human behavior is not something that we can understand only with the aid of distinctively modern scientific knowledge. The point is not that modern scientific knowledge cannot or does not significantly expand and deepen our understanding of ourselves. Nor is it that Aristotle’s psychology is true or adequate, even leaving aside the obviously mistaken physiological dimension of it. It is, rather, that Aristotle (and plenty of other pre-modern philosophers whose theories of psychology are incompatible with his) already operated with concepts that we can still recognize as applying to ourselves in a way that none of the outmoded metaphorical models that Epstein discusses can.
Whether or not we accept Aristotle’s (or Plato’s, or the Stoics’, or Aquinas’, or Hume’s) views, we can recognize ourselves as subjects of desires, beliefs, emotions, intentions, and thoughts in a way that we cannot really recognize ourselves as spirit breathed into dust, as hydraulic systems of the four humors, as very complex mechanical automata, or as telegraphs. There is more self-understanding to be had even from intelligently rejecting the psychology of the Republic, the Nicomachean Ethics, or Augustine’s Confessions than there is from a similar rejection of the literal truth of Genesis or from discarded scientific and pseudo-scientific theories. There are, of course, plenty of philosophers and neuroscientists who will dismiss it all as mere ‘folk psychology’ and offer up a contingent historical narrative to explain why we can still take Aristotle’s or Hume’s accounts of human action seriously in a way that we can’t take the hydraulic theory seriously. But perhaps, for all the celebrated opacity of the self, it is simply easier to understand intelligent human behavior than the physiology of the brain.
2. I have no sophisticated views on the relationship between metaphor and analogy, which seem to be treated quite differently by linguists, rhetoricians, and philosophers, and by different traditions in each discipline.