Attacks on free will have become fairly common. While the attackers often recognize what is at risk, namely the sense of responsibility and accountability, they are motivated by a desire to establish the primacy of what science seems to be telling us over what we have commonsensically believed for centuries. This post examines the critique of free will and its associated problems, showing why, contrary to common belief, science does not deny free will but rather depends upon it.

The Modern Critique of Free Will

Free will is defined as our ability to choose a course of action among different possible alternatives. For example, if you are driving to work, whether you take the freeway or the streets appears to be a choice. The critic of free will argues that what we consider free choices aren’t actually free because they are determined by the practical constraints we take into account when rationally arriving at a decision. The decision to travel by freeway or by the streets, for instance, is determined by factors such as the time it takes to reach the destination, the weather, whether or not you want to stop for an errand on the way to work, news reports of accidents, or whether a road is under repair.

Do we really choose our route to work freely, or are we constrained by conditions in the world which, in effect, reduce the alternatives to just one? The critic of free will claims that since the decision is constrained by practical factors, the choice isn’t actually free. Indeed, often all the available options are undesirable, and we merely pick the “best” of a bad lot. Such decisions cannot be considered “free” in any traditional sense. If the selection of the best alternative is controlled by constraints, what is “free” about it?

Now, you might argue that free will operates in the personal sphere of values, rather than the day-to-day world of pragmatic limitations. For instance, you might claim that we are free to make our choices of life partners. If you are thinking in this way, think again. The critic of free will asks: Did you really choose your life partner freely or were you constrained by social status, economic prestige, political ideologies, and religious beliefs, aside from your own biology over which you seem to have no volitional control? If you applied all the factors that went into the decision to marry someone, was there any real choice, or was it all determined by the circumstances?

The critics of free will thus argue that there is a choice, but it is rational rather than free. The idea is quite simple: if the choices are rational, then they can be reduced to the working of a machine that appears to “think” by taking into consideration all the factors while aiming to achieve a “goal”. Once your thinking has been reduced to the working of a machine, your free will would be nothing other than the operation of a mechanical system tasked with solving problems, quite like a computer that proves theorems. If free will can be reduced to a material process, then we have buried the last vestiges of the “soul”.

The Materialist Reduction of Free Will

The materialist reduction of free will is achieved if what we call “free will” can be equated with rational thinking. Rationality demands that we take the shortest trip to work, pick what we think is the most compatible life partner, and so on. While there are many constraints in making such choices, our so-called “free will” simply optimizes (maximizes or minimizes) the effects of such factors to find the optimal trajectory. Looked at in this way, all acts of rational thinking are similar in nature to the Traveling Salesman Problem: finding the shortest route to a destination while satisfying the constraint of passing through each city. You might think that your choices of a path in the world are actually free, but this so-called free will can be reduced to successive optimizations.
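
To make this reductionist picture concrete, here is a minimal sketch in Python, using invented locations and distances, of a daily decision recast as brute-force constrained optimization in the spirit of the problem described above: every ordering of the errands is tried, and the “choice” is simply whichever route minimizes the total cost.

```python
# A minimal sketch of a "decision" as constrained optimization, using
# hypothetical locations and distances on the way to work.
from itertools import permutations

distances = {
    ("home", "bank"): 4, ("home", "school"): 6, ("home", "office"): 9,
    ("bank", "school"): 3, ("bank", "office"): 7, ("school", "office"): 2,
}

def dist(a, b):
    """Distance lookup that ignores the direction of travel."""
    return distances.get((a, b)) or distances[(b, a)]

def best_route(start, end, stops):
    """Try every ordering of the intermediate stops; keep the shortest route."""
    best = None
    for order in permutations(stops):
        route = (start, *order, end)
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        if best is None or length < best[0]:
            best = (length, route)
    return best

print(best_route("home", "office", ["bank", "school"]))
# (9, ('home', 'bank', 'school', 'office')) -- on this view, the "choice"
# of route is just whatever minimizes the total cost.
```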

You can take a trip around the world to reach your workplace every day, or pick a partner with whom you have nearly nothing in common, and while that would seem irrational, the critic will insist that you must have some rational motive for engaging in such preposterous actions. For example, you may be driven by the motive of proving to other people that there is indeed free will, doing crazy things just to show that you are ultimately free.

When free will is defined as the choice of action, and action is defined as the best rational response to a given set of constraints, then the only way to prove free will’s existence is to act irrationally, and then be able to prove that that irrationality is not some deeper and hidden form of rational purpose.

Given these problems, the critic of free will concludes that there isn’t actually any free will. If you vote for a political party, you are not acting freely out of some innate moral choice; you are constrained by your need for economic prosperity, social standing, better education, health benefits, religious affinity, sexual orientation laws, and so on. When you take into account all the “factors” that go into making a decision, how is the choice free?

The Loophole in the Critique

The loophole in the above critique of free will is that it is not clear whether there is a fixed set of factors that must be taken into account before arriving at a decision. We might be able to reduce a decision to an optimization problem, but that optimization depends on our choosing some factors to be maximized or minimized. It is not clear whether the problem is fully defined by the constraints, or whether we choose which factors to optimize, thereby defining what the actual problem to be solved is.

For instance, not everyone takes into account religious affiliation and social standing before selecting a life partner. Others might base their choice upon ideological commitments or sexual orientation. How do we know which factors are important for which people? Again, it is not clear whether the problem to be solved is defined by the circumstances, or whether we choose which problems we wish to solve, thereby injecting uncertainty into the solution.

Clearly, not everyone looks at a set of facts in the same way. A given set of facts does not, by itself, present one specific problem. We might discard or compromise on some outcomes because we really want a particular kind of outcome. The challenge for the reductionist is now to reduce the choice of the problem itself, rather than the solution of that problem.

If you are going to argue that all choices reduce to rational decisions, then you must be prepared to explain how different kinds of rationality originate in different people. In each person, some constraints carry more weight than others (e.g., some people may weigh sexual orientation above religious similarity when choosing a life partner). Is this allocation of weights among the various factors decided by another set of “deeper” rational factors, or is the preference simply a matter of our choices?
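
To see why this matters, here is a small Python sketch with made-up candidates and weights: the “rational” scoring procedure takes the weights as an input, and nothing inside the procedure decides what those weights should be.

```python
# A small sketch showing that a weighted "rational" scoring procedure takes
# the weights as given; the weights themselves are a choice, not a result.
def score(candidate, weights):
    """Weighted sum over whatever factors the weights happen to mention."""
    return sum(weights[f] * candidate.get(f, 0) for f in weights)

candidates = {
    "A": {"shared_beliefs": 0.9, "economic_prestige": 0.3, "compatibility": 0.6},
    "B": {"shared_beliefs": 0.2, "economic_prestige": 0.8, "compatibility": 0.7},
}

# Two different weight allocations; each is itself a choice, not a computation.
weights_1 = {"shared_beliefs": 0.7, "economic_prestige": 0.1, "compatibility": 0.2}
weights_2 = {"shared_beliefs": 0.1, "economic_prestige": 0.6, "compatibility": 0.3}

for w in (weights_1, weights_2):
    best = max(candidates, key=lambda name: score(candidates[name], w))
    print(best)  # prints "A" then "B": different weights, different "rational" answer
```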

Rationality is a Bottomless Pit

Let’s suppose that we are able to define some meta-factors that decide how much weight criteria such as sexual orientation, economic prestige, and social status should be given. Those weights would then define the real problem to be solved, which could in turn be solved rationally.

But you quickly find that the choice of the meta-factors is itself questionable. How do we know that the meta-factors are rational, and not merely our choices? By choosing those meta-factors, therefore, you don’t solve the problem of choice, because someone will soon press you with the question: Upon what deeper principles does this ground of human rationality stand?

Clearly, this is a cascading, recursive, logically bottomless pit. If you are going to use reason to explain all actions, but your rational system depends upon a choice of axioms (regardless of what those axioms are), then you haven’t truly solved the problem of choice until you explain how you arrived at the axioms themselves. As you try to explain those axioms by postulating more axioms, you end up in a bottomless pit of recursive reasoning.

There is only one way out of this bottomless pit: you must find a single axiom from which you can explain all rational action. This axiom must not be free will, because you cannot explain a thing by postulating that very thing, and you cannot have more than one axiom, because anything more than one still leaves room for choice. The only way to explain free will based on reason is if your rational system has only one axiom, and that axiom is not free will.

Now, if you actually try to go down this path, you will face a very simple problem: How do we reduce all reasoning to a single axiom plus logic? Isn’t the conclusion of a single axiom always that axiom no matter how many times you apply logic to it? For example, if you have a single letter ‘A’ in your alphabet, you can form statements like AAA, AAAA, AAAAA, etc. You are not adding any information to the system by repeating the same thing over and over. In effect, you now have the reverse problem of how to explain diversity.

If you happen to have more than one axiom (say two letters, ‘A’ and ‘B’), then you can create statements such as ABABAB or AAABBB, but you are now again faced with the problem of choice: should the letter A be followed by the letter B or by another A? Note that you can start with a set of axioms and reasoning can take you in different directions, producing different conclusions. The conclusion you produce is not just an outcome of logic, but also of the direction taken. That direction is itself not dictated by logic, so any conclusion you arrive at has a choice embedded in it.
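
A toy Python sketch of this point: with a one-letter alphabet every string of a given length is forced, while with two letters the possibilities multiply and something must select among them.

```python
# With one symbol every derivation is forced; with two symbols each step
# requires picking a direction that the symbols themselves do not dictate.
from itertools import product

def strings(alphabet, length):
    """All strings of the given length over the alphabet."""
    return ["".join(s) for s in product(alphabet, repeat=length)]

print(strings("A", 3))    # ['AAA'] -- one possibility, nothing to choose
print(strings("AB", 3))   # ['AAA', 'AAB', 'ABA', ...] -- 2**3 possibilities
# Which string you actually derive is not fixed by the alphabet;
# something must select among the alternatives.
```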

In short, if you traverse this path, you will be frustrated. You will find that either you make no progress, or your progress actually depends on making some choices that are not dictated by anything you knew previously; that is, they aren’t rational. You might be tempted to terminate this exercise by postulating free will as an axiom! Such a postulate looks ever more attractive once you have spent some time on the problem and found that it has no exit.

Whether or not the axiom of free will is true is beside the point. I’m not suggesting (at least not yet) that free will is an axiom, or that it should be an axiom in your reasoning. I’m only saying that either you include that axiom in your reasoning or you cannot produce reasoning at all.

The Axiom of Choice

The critics of free will reduce choice to rationality. But, unfortunately, rationality is a bottomless pit. No one has so far been able to define the fundamental basis on which rationality itself rests, how we must all reason, or which dimensions must be given how much weight. I’m not talking about the big questions of ethics, morality, goodness, and beauty. I’m saying that even in the most rigorous of sciences (logic, mathematics, and computation) we still don’t know what the foundation of rationality is.

The only thing we know is this: logic alone is inadequate to produce rationality, so we need some fundamental ideas from which to reason. In a formal system, these ideas are called axioms. But whatever basic axioms you choose, the choosing is ultimately yours, and it cannot itself be rationalized.

To illustrate this point, we only need to take a closer look at the elementary mathematical problem of ordering an unordered set of objects. To count objects, we must order them: one, two, three, four, five, and so on. To order them, we must pick one object and call it the first, then pick another and call it the second, and so forth. The problem of counting thus reduces to the problem of ordering, and the problem of ordering reduces to the problem of choice: How do you pick the first object?

You might say: Well, I’m going to pick the objects rationally by measuring their physical properties: height, weight, speed, and so on. But which property (height, weight, or speed) are you going to measure first? To order the objects, you might sort them first by height, then by weight, then by speed; or you might sort them by speed, then weight, then height. How do you decide what to measure first?

The point is that whenever you have an unordered set of objects, ordering them requires the choice of a method, e.g. measuring height before weight, and weight before speed. If you refuse to make that choice and try to derive it from some further rational principle, you end up with another set of axioms by which height, weight, and speed must be ordered; but to perform this ordering, you must first order those axioms themselves.
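
Here is a small Python sketch, with made-up measurements, of this point: the same unordered collection yields different orderings depending on which properties you decide to sort by first, and that decision is not supplied by the data.

```python
# The sort itself is mechanical; the choice of sort key -- the method of
# ordering -- is the part the data does not determine. Measurements invented.
objects = [
    {"name": "x", "height": 3, "weight": 9, "speed": 5},
    {"name": "y", "height": 3, "weight": 2, "speed": 7},
    {"name": "z", "height": 1, "weight": 9, "speed": 6},
]

by_height_then_weight = sorted(objects, key=lambda o: (o["height"], o["weight"]))
by_speed_then_height = sorted(objects, key=lambda o: (o["speed"], o["height"]))

print([o["name"] for o in by_height_then_weight])  # ['z', 'y', 'x']
print([o["name"] for o in by_speed_then_height])   # ['x', 'z', 'y']
```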

Given these problems, set theorists decided to postulate the Axiom of Choice. This axiom says, roughly, that there are infinitely many ways in which we can order an unordered set of objects (i.e. label them as first, second, third, and so on), and we will simply assume that one of these methods of ordering can be picked. Let’s not worry about how an individual method orders things, or how we arrive at one of the many possible methods; that is, let’s not try to rationalize this choice and reduce it to more fundamental things, because we can see that this reduction, in turn, needs more axioms and choices.
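
For reference, the standard set-theoretic formulation is usually stated in terms of a choice function rather than orderings (the two pictures are equivalent via the well-ordering theorem): for every family of non-empty sets, there exists a function that picks one element from each member of the family.

```latex
% Choice-function form of the Axiom of Choice: for any family X of
% non-empty sets, some function f selects an element from each set in X.
\forall X \,\Big[\, \varnothing \notin X \;\Rightarrow\;
  \exists f : X \to \textstyle\bigcup X \;\;
  \forall S \in X \;\big( f(S) \in S \big) \,\Big]
```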

Choice Isn’t A Human Problem

The key takeaway is this: choice is not merely a human problem, and therefore does not occur only in relation to the decisions we need to make. Rather, it occurs in every attempt to order, i.e. distinguish and count, things. If we cannot distinguish and count, we cannot know; and if we cannot know that there are many things, how do we form theories about those things?

We get around the problem by postulating choices; whether that postulate is true or false is beside the point. The key point is that if you are going to count things in order to formulate theories about nature, then you need choices. These theories may eventually describe the human body and brain, and that description might appear to reduce our body and brain to the rational theory. However, that reduction does not eliminate choice, because we already made choices in formulating the theory. Since there are many ways in which we could have formed the theories (using different axioms), the mechanism that supposedly explains our brain itself depends upon our original choices.

The only way we can totally eliminate choices is if there is a universal theory of nature that depends upon a single axiom which is not free will. However, that single axiom — as we saw above — will not explain diversity. The theory of a single axiom — even if it exists — will be incomplete. If, however, you decide that an arbitrary theory without the axiom of free will is the ultimate natural theory that also explains free will, your claim would be inconsistent since you made a choice of axioms to deny that there are choices.

A Dilemma for Reductionism

The reductionist reduces free will to rationality. The rationalist then reduces reasoning to computing. The computer scientist reduces computing to a logical machine, which the physicist reduces to atoms and molecules, which the mathematician reduces to a mathematical theory (of Hilbert spaces), which reduces to the idea of functions, which reduces to an ordered set of numbers. Now we have done all the reducing we know how to do today, and we are faced with the problem of how to order an unordered set of objects. This ordering needs a method that cannot be decided rationally and must be chosen. Once you see that choice at the bottom, you can see that similar choices must be made at each successive level of reduction.

You meet your destiny on the road you take to avoid it.

The dilemma of reductionism is the choice between a bottomless pit and the Axiom of Choice. If you choose the bottomless pit, you not only have a lot of explaining left to do, but you are also not being self-consistent: you are making a choice while denying that one is possible. If instead you choose the Axiom of Choice, you are at least logically consistent, because your action only affirms that choices are possible.

The key point is that if you want to be entirely rational, then you must axiomatize choice. If, however, you deny choice, then you are not only left with a bottomless pit, you are also logically irrational.

I’m still not insisting that free will is true! I’m only saying that if we are going to be rational about it, then we must accept free will as a unique, fundamental, and irreducible concept.