People familiar with Objectivism will remember an old article by Nathaniel Branden titled “The Contradiction of Determinism” (Objectivist Newsletter, May 1963). In it, he argues neither that the doctrine of free will is true nor that determinism is false. Rather, he argues that if determinism is true, we cannot know it. And the reason we can’t know it is that, if determinism is true, no knowledge is possible at all.
The argument is that knowledge must be validated by a process of reason. Our suppositions about the world are not self-certifying. The mere presence of an idea in your mind does not establish that it is true. Therefore, we have to evaluate our suppositions about the world by means of sensory evidence and other tests, such as coherence. This must be done by a process of reason. But the process of reason cannot be realized by merely mechanical causation of the sort that is expressed by causal laws. Causal laws determine that a certain sort of event follows from a certain sort of prior event, and this sort of determination is entirely different from seeing reasons or recognizing logical connections.
For example, an electronic calculator outputs “4” in response to “2+2=”, not because it recognizes that this is logically required, but because it is wired to do so. If it were wired differently, it would produce a different answer. If some of its wiring becomes faulty, it will produce a different answer. Of course, an electronic calculator is not very sophisticated, and so it cannot be expected to correct such errors. We can imagine a more sophisticated machine built with safeguards to protect against errors. But this doesn’t affect the central point, which is that a machine, no matter how sophisticated, does not act on the basis of reasons, but only of causes. A machine transitions from one state to the next on the basis of its previous state in accordance with causal laws. That is fundamentally different from recognizing a logical relation.
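The point can be made vivid with a toy sketch (the names and the table here are my own illustration, not anything from Branden). A machine’s “answer” is fixed entirely by its wiring; change the wiring and the answer changes, with no logical necessity anywhere in the process:

```python
# A toy "calculator" whose behavior is exhausted by its wiring (a lookup table).
# It does not recognize that 2+2 must equal 4; it merely transitions from an
# input state to whatever output state its wiring dictates.
normal_wiring = {("2", "+", "2"): "4"}
faulty_wiring = {("2", "+", "2"): "5"}  # the same machine, wired differently

def calculate(wiring, a, op, b):
    # State transition according to causal "law" (the wiring), nothing more.
    return wiring[(a, op, b)]

print(calculate(normal_wiring, "2", "+", "2"))  # prints 4
print(calculate(faulty_wiring, "2", "+", "2"))  # prints 5
```

Nothing in the second machine is malfunctioning by its own lights; it does exactly what its wiring determines, which is the essay’s point.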
If physical causation is fundamentally different from (and incapable of) recognizing logical relations, and if recognizing logical relations is necessary for reason, and if reason is necessary for knowledge, then an entity that operates entirely by physical causation can’t know anything. Therefore, if determinism claims that every human being operates entirely by physical causation, then it implies that no human being can know anything, which includes the truth of determinism (assuming determinism to be true).
Unfortunately, Branden’s statement of the argument is not completely clear. But I think what he intended is more or less as I have stated it. Here is what he says:
Knowledge is the correct identification of the facts of reality; and in order for man to know that the contents of his mind do constitute knowledge, in order for him to know that he has identified the facts of reality correctly, he requires a means of testing his conclusions. The means is the process of reasoning—of testing his conclusions against reality and checking for contradictions. It is thus that he validates his conclusions. But this validation is possible only if his capacity to judge is free—that is, non-conditional (given a normal brain state). If his capacity to judge is not free, there is no way for a man to discriminate between his beliefs and those of a raving lunatic.
And he uses the machine example to illustrate that a machine, even a sophisticated one, would not be using reason and logic. Unfortunately, he does not explicitly contrast physical causation with seeing reasons. Thus, his complaint about the sophisticated machine is only that if its self-correcting safeguards are programmed improperly, it won’t be able to fix them. But one could make a similar complaint about a dull human—or about a smart human faced with a sufficiently complex problem. The difference isn’t about errors. A human equipped with reason might repeatedly fail to spot a mistake, might be uncreative in figuring out how to test a supposition, or might be unable to solve some problem or identify the answer to some question of fact. On the other hand, machines are already much more reliable at identifying certain matters of fact than humans are, and at the rate AI is going, a truly general-purpose problem-solving and learning machine may soon be with us. But there would still be an important difference—apparently—between any machine and us, which is that a machine does not see reasons or recognize logical relations, and we do.
To get an indication of the difference I am pointing to, consider Wittgenstein’s rule-following argument (e.g., Philosophical Investigations, §§185–205). The argument is somewhat obscure (after all, it’s Wittgenstein), but it is common to suppose that Wittgenstein is issuing a skeptical challenge to say what constitutes one practice, as opposed to any other, being the correct way to apply a given rule in a novel situation. This is how Kripke (in Wittgenstein on Rules and Private Language), for example, interprets Wittgenstein. For example, suppose I am applying a rule, “+1”, to generate a series of numbers. Starting with 0, I generate 1, 2, 3, … And now suppose I get to 1000, which I have never counted to using this rule before. What number correctly continues the series? 1001, presumably. But what if I write 1002 instead? Or 10,001? Or 5? What determines that any of these is incorrect?
Kripke’s way of putting the challenge is to have the skeptic suggest an alternative rule and ask what determines that the “normal” rule is the correct one, not the alternative. Thus, in the example above, the normal rule would be plus, which would dictate that when I reach 1000, the next number in a +1 series is 1001. An alternative rule might be quus, which acts like plus for quantities less than 1000, but which dictates a result of 5 for any quantity greater than or equal to 1000. Therefore, if quus is the rule, then the next number in a +1 series after I reach 1000 is 5. Now the question can be put by asking what determines that plus is the rule I am following, not quus. After all, if all of my experience up to now has been with quantities below 1000, then the sum total of my past training and behavior is compatible with both rules. What is there to show that I didn’t really have quus in mind all along, so that when I apply +1 to 1000 and get 5, that is completely correct and consistent with what I always intended?
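The skeptic’s point is easy to state precisely. Here is a minimal sketch of the two rules (using the essay’s threshold of 1000 rather than Kripke’s original 57): every finite history of behavior confined to quantities below 1000 is identical under both, yet they diverge exactly where the series reaches 1000.

```python
def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke-style deviant rule: agrees with plus when both arguments
    # are below 1000; otherwise yields 5.
    if x < 1000 and y < 1000:
        return x + y
    return 5

# All past applications of "+1" to quantities below 1000 are
# indistinguishable between the two rules...
assert all(plus(n, 1) == quus(n, 1) for n in range(1000))
# ...but they come apart at exactly the novel case.
assert plus(1000, 1) == 1001
assert quus(1000, 1) == 5
```

Since any finite body of training data is compatible with both functions, no record of past behavior by itself can settle which rule was being followed.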
Perhaps we could somehow identify my pre-existing behavioral dispositions or neuronal pathways and show that they would have determined me to put 1001, not 5, so that if I put 5 now, that must be the result of some performance error. But the trouble with this is that if physical dispositions or structures are to be the criterion, then there can be no nonphysical standard by which to say they are ever wrong. Thus, if I put 5 instead of 1001, despite the fact that my physical dispositions or wiring previously would have made me put 1001, how are we to say this is a mistake? Why isn’t the change part of the system? Perhaps we will want to say that certain physical structures became deformed or weakened and thus failed to perform normally, but by what standard are we to say this? Obviously, to say that the standard is that the physical system should realize the rule plus is to beg the question. Indeed, no appeal to any abstract ideal of performance, such as we might find in an engineering specification, for instance, will do, since that amounts to one more rule (like plus or quus) which the system is to follow. The whole problem is that we need a criterion by which to say what actual performance is dictated by an “abstract ideal of performance.” Therefore, to say that a physical system will be correct when it satisfies an abstract ideal, such as an engineering spec, is to beg the question.
Of course, this suggests a direct way of solving the problem, which would be to say that I understand the rule, which is plus, and that I can recognize how to apply it to numbers I have never encountered before (since the application depends on common features of the system of numbers). On this view, the rule is the criterion, and nothing further is needed. A rule is an abstract entity, which I have the cognitive ability to understand and apply. Of course, this solution depends on a lot of nonphysical talk, like “understand,” “recognize,” “abstract,” “cognition,” and for that matter “rule.”
It may be felt that this is a little unsatisfying, even if we don’t mind the nonphysical talk. Shouldn’t there be some criterion for correct application of a rule? In many cases, there might be, depending on the rule (e.g., if it is not fully specified in itself). Also, many rules, especially if they are elaborate or derivative, might be part of systems of rules that interlock, so that the violation of the one rule involves violations of others also. Nevertheless, there must be some rules that an individual can simply understand and apply without any further criterion. This is the point of Lewis Carroll’s famous article, “What the Tortoise Said to Achilles.” Suppose the tortoise has learned the meaning of the material conditional, symbolized by “⊃” (to be read as “If…, then…”). And suppose he agrees to take as given the propositions “P ⊃ Q” and “P”. But suppose he insists that he cannot see that “Q” follows from these. What can Achilles say to compel his assent? Perhaps Achilles will introduce an explicit rule, “((P ⊃ Q) and P) ⊃ Q”, and get the tortoise to agree to this rule. Now since the antecedents of the new rule, “P ⊃ Q” and “P”, are both given and accepted by the tortoise, and the rule is accepted also, surely he cannot avoid accepting “Q”. But of course, in truth if the tortoise could not see before that “Q” follows from “P ⊃ Q” and “P” alone, the new rule will be of no help. Obviously, to apply the new rule requires the tortoise to grasp the principle of conditional elimination that would have enabled him to make the earlier inference. Or in other words, to derive “Q” from “((P ⊃ Q) and P) ⊃ Q”, “P ⊃ Q”, and “P” is just a more elaborate version of the same inference form as deriving “Q” from “P ⊃ Q” and “P”. So, if he could not follow the latter, he will not follow the former. And indeed, if he could not follow the latter, it is hard to see what other rule there could be that would make him see it. 
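Carroll’s regress can be made vivid in a proof assistant. In Lean 4 (a sketch of my own, not anything from Carroll), modus ponens is not licensed by citing a further premise; it is the primitive act of applying a function to an argument. The tortoise’s “new rule” is itself provable, but using it to obtain Q still requires that same primitive application step:

```lean
-- Modus ponens as primitive application: given h1 : P → Q and h2 : P,
-- the term h1 h2 has type Q. No further rule is invoked; the application
-- itself is the inferential step the tortoise refuses to take.
theorem mp (P Q : Prop) (h1 : P → Q) (h2 : P) : Q := h1 h2

-- The tortoise's added rule ((P → Q) ∧ P) → Q is provable too,
-- but discharging it just repeats the same application move.
theorem tortoise_rule (P Q : Prop) : ((P → Q) ∧ P) → Q :=
  fun h => h.1 h.2
```

Adding the rule as another premise only lengthens the antecedent; the step from premises to conclusion must at some point simply be taken, not stated.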
The moral is that there cannot be a separate criterion of correctness for every rule. Some logical relations you have to just “see” by the “light of reason.” If there are no logical relations you can understand and apply primitively, there is nothing further to be done. There are some reasoning processes we have to be innately equipped to perform. Otherwise, the reasoning power of the individual can never get going.
(It is noteworthy that something like this is the way Aristotle proceeds to develop his theory of the syllogism in the Prior Analytics. He says that certain basic syllogisms are “perfect” in that what is stated in the syllogism alone is sufficient to make the necessity of the conclusion “evident.” Other, less evident forms are shown to be evident by relation to the perfect forms.)
I have long thought that, for all the ballyhoo about the rule-following argument, it shows no more than what Lewis Carroll already pointed out, namely that if you need a criterion of correct application for every rule, you’re sunk. Some rules and their applications we simply have the power to understand, and, from the standpoint of reason (as opposed to psychology), there is no more to be said about it. Of course, this is not the use to which Wittgenstein puts his argument. His conclusion is roughly that, since there is no internal or individual—“private”—criterion for the correct application of a rule, the criterion is public. To follow a rule is to participate in a custom or usage or institution, and this is why there can be, for example, no private language (since a language is constituted of rules). It is curious that, an individual behavioral or neuronal criterion having been rejected on the grounds that there can be no physical standard of error, a public behavioral criterion is accepted, although it seems to be subject to the same criticism. After all, on the institutional view of rules, if by some mass delusion we all started applying quus instead of plus, there would be no standard by which to say that was an error. Of course, maybe that’s sociologically correct! Maybe that’s exactly what we would do, and do do. Think of linguistic change, for example. Today’s common English usage errors become (annoyingly) tomorrow’s standard usage. But if so, then notice that the supposed proof that a private language is impossible has failed. If an ideal standard of error is not needed for institutional rules, it will not be needed for private rules either. An individual’s private rules could be constituted by his own habits of usage, and errors could be just those performances he would chide himself for, by analogy with public, institutional rules. Therefore, if there can be public rules on this model, there can be private rules also.
However, all this is rather beside the point I introduced the rule-following problem to illustrate, which is that reason is not reducible to causal processes. If there is an answer to the skeptic’s challenge in the rule-following argument, it must appeal to our having the cognitive ability to understand and apply a rule, and this ability is not reducible to behavioral dispositions or neuronal activity—or so I have argued. This is the way in which a machine, if it is governed entirely by processes of physical causation, does not see reasons or recognize logical relations, and so functions in a way entirely different from us. This is not an outré or radical idea. For instance, I think this is just the sort of view of reason that motivates the “anomalous monism” of such a pillar of analytic philosophy as Donald Davidson (see “Mental Events”). Of course, Davidson’s anomalous monism was supposed to show how the two worlds, mental and physical, can coexist despite running on entirely different principles, so that a machine can have reason after all (including freedom!). However, I don’t think Davidson’s theory succeeds in that, and my impression is that few other philosophers have been persuaded either. And Davidson’s theory, even if it were correct, still would deny that reason is reducible to physical causation.
A way of putting my point is to say that reason is not a naturalistic process. I am not particularly comfortable with the term “naturalistic.” I hardly mean to say that reason is supernatural, much less that it is incomprehensible or mysterious. Nor would I call it “unnatural.” I use the term “naturalism” because it seems to be the common term, and I haven’t thought of a better one. What I mean by saying that any phenomenon is not naturalistic is just that it is inexplicable by physical science or causal laws.
It seems to me that naturalism is deeply embedded in the current zeitgeist, to the point of not even being on most people’s minds—not even most philosophers’—as an explicit commitment. It often manifests itself just as a feeling of slight embarrassment or discomfort whenever somebody is so gauche as to violate it. That is, when they violate it explicitly. For, phenomena that at least apparently violate it are ubiquitous. Besides reason, there is qualitative experience (“qualia”), color (whether experienced or not), consciousness, intentionality, signification, and knowledge. I hope the way in which at least most of these are nonreducible is at least vaguely apparent. For some indications of what I have in mind, on qualitative experience and consciousness, see the work of David Chalmers (e.g., “Consciousness and Its Place in Nature”); on color, see the work of C. L. Hardin (e.g., Color for Philosophers); on intentionality and signification, Tyler Burge (e.g., “Perceptual Entitlement” and “Modest Dualism”); on knowledge, Timothy Williamson (e.g., Knowledge and Its Limits). I think it is remarkable that so many people (including myself most of the time) seem to blithely assume that naturalism will ultimately prevail in spite of all the phenomena that appear to violate it. For many of these, I think there is at present no realistic program at all for naturalizing them. I suppose the common assumption of naturalism is a testament to the enormous prestige that now accrues to physical science.
Be all this as it may, in the remainder I want to point out that the claim that reason is non-naturalistic is not the same as the claim that it requires free will or implies that we have free will. This means that, although I agree with Branden that human knowledge requires reason and reason does not operate by physical causation—and indeed must be in some way liberated from determination by physical causation—I don’t agree that the process of reason is necessarily free. Indeed, it seems likely to me that it is not free.
The idea that the use of reason makes us free is most closely associated, I think, with Immanuel Kant. (This is of course a bit ironic in view of Objectivism’s hostility to Kant.) In the Groundwork of the Metaphysics of Morals, Kant distinguished between what he called the “autonomy” and the “heteronomy” of the will. The autonomous will is a law to itself: it acts in obedience to laws—universal, rational principles of action—which it gives to itself. The heteronomous will takes its determination from some object outside of itself. Usually, this means some object of desire, such as to be healthy, to be admired, to be wealthy, etc. Thus, the heteronomous will is clearly not free, since it allows itself to be determined by its passions or by other aspects of its empirical psychology (or, sometimes, by irrational “ideals” it cooks up for itself by the ungrounded use of “pure reason”).
(As a side note, notice the identification of the self with the rational will, while the passions are treated as alien. This is a commonplace in thinkers ever since Plato. “You” is your rational ego. Your desires and feelings are not you. I mention this to point out that not everyone has always agreed. For example, Aristotle argues that actions done under the influence of the passions such as anger or lust should still count as voluntary because they are no less a part of you than your reason (Nicomachean Ethics, III.1, 1111a21–1111b3). He says, “What is the difference in respect of involuntariness between errors committed upon calculation and those committed in anger? Both are to be avoided, but the irrational passions are thought to be not less human than reason is, and therefore also the actions which proceed from anger or appetite are the man’s actions. It would be odd, then, to treat them as involuntary.”)
Autonomous action, by contrast, is free. “What, then, can freedom of the will be other than autonomy, that is, the will’s property of being a law to itself?” (Groundwork, Sec. III, 4:447). For the will to be a law to itself is incompatible with its being determined or even influenced by anything else. Not coincidentally, Kant’s supreme principle of morals, the categorical imperative, says precisely that the will should act only on maxims it lays down for itself as universal laws. Therefore, the autonomous will is the moral will, and to act morally is to become autonomous and therefore free. By acting morally, we make ourselves free and give ourselves dignity. For Kant, this is the payoff of morality.
But although the doctrine that reason gives us freedom is (I think) most famously associated with Kant, he is not the originator of it. John Locke had said something similar nearly 100 years earlier: “were we determined by anything but the last result of our own minds, judging of the good or evil of any action, we were not free; the very end of our freedom being, that we may attain the good we choose. And therefore, every man is put under a necessity, by his constitution as an intelligent being, to be determined in willing by his own thought and judgment what is best for him to do: else he would be under the determination of some other than himself, which is want of liberty” (Essay Concerning Human Understanding, II.xxi.49). Unlike Kant, Locke does not explicitly state that the use of reason requires liberation from any predetermination by the chain of physical causation. Nevertheless, this seems implicit in his repeated use of terms like “free”, “unbiased”, and “liberty” in connection with reason.
The linkage of freedom with reason may well go back further than Locke, but that is as far as I have traced it.
Nevertheless, as I’ve said, it does not seem to me that the non-naturalistic nature of reason means that it gives us freedom. The reason is simply that the recognition of reasons and logical relations is not particularly free. Going back to our example of addition, suppose that “2+2=4” is a truth of reason. Is reason free to disregard it? On the contrary, to the extent that it is reason, it is compelled to accept it! It has no choice. And similarly, I should think, for all reasons and logical relations. If reason consists in the power to recognize reasons and logical relations, then it is constrained by this power. It can do no more or less, rather as the visual system has no choice about what visual information to process and how to process it, once the eyes are open and focused on a scene.
This jibes with the point that belief is involuntary. People sometimes speak of “deciding to believe” this or that, but no one literally does this. You cannot make yourself believe any arbitrary proposition simply by deciding to believe it. I cannot make myself believe, say, that grass is red, by a direct act of will—and neither can you. Of course, we can say that we believe in, say, God or whatever, but that is not the same as actually believing. Rather, we believe what we have evidence and reason for, and we do not arbitrarily decide these, either. Strictly speaking, we do not decide them at all, we recognize them. There is a logical reason for this. To believe something is to think it is true. But to think something is true is incompatible with the thought that you simply decided it. Therefore, only considerations that imply the truth of something—reasons, evidence, logic—can be a basis for belief, and these are not up to us. We don’t decide them, we recognize them. (On this point, see Bernard Williams, “Deciding to Believe.”)
The kind of freedom we want when we talk about free will seems to be that we are in some way the initiator of our own decisions and actions. Reason per se does not do that. Therefore, reason per se cannot be the agency of free will. Nor does having reason guarantee that we have free will. Nor would determinism mean that we do not have reason or knowledge (though the sort of predetermination we would be subject to in that case would not be exclusively that of physical causation). It is tempting to suppose that the non-naturalistic nature of reason, its freedom from the chain of physical causation, means that reason itself is free and that we are free through it. But this is just a mistake. There can be kinds of predetermination other than that imposed by the chain of physical causation. Free will requires something more than just non-naturalistic reason.
- Branden, Nathaniel. 1963. “The Contradiction of Determinism.” Objectivist Newsletter, 2 (5): 17–20.
- Burge, Tyler. 2003. “Perceptual Entitlement.” Philosophy and Phenomenological Research, 67 (3): 503–538.
- ———. 2010. “Modest Dualism.” In Robert C. Koons and George Bealer (eds.), The Waning of Materialism, Oxford University Press, 2010: 233–250.
- Carroll, Lewis. 1895. “What the Tortoise Said to Achilles.” Mind, 4 (14): 278–280.
- Chalmers, David J. 2002. “Consciousness and Its Place in Nature.” In David J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press, 2002: 247–272.
- Davidson, Donald. 1970. “Mental Events.” Reprinted in Donald Davidson, Essays on Actions and Events. Oxford University Press, 1980: 207–227.
- Hardin, C. L. 1988. Color for Philosophers: Unweaving the Rainbow. Expanded Edition. Hackett.
- Kripke, Saul A. 1982. Wittgenstein on Rules and Private Language. Harvard University Press.
- Williams, Bernard. 1970. “Deciding to Believe.” Reprinted in Bernard Williams, Problems of the Self, Cambridge University Press, 1973: 136–151.
- Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford University Press.