A computer is a canonical example of a machine. Every machine can be described by a mathematical theory, and every mathematical theory can be automated on a computer. Therefore, if you can describe something mathematically, you can also automate it on a computer. People often suppose that this means that if we had a mathematical description of nature, that description could also be automated on a machine. In the case of living beings, such automation would mean that we too are automatons—machines. This post examines this argument and highlights the holes in it.


**The Classical Machine**

We commonly suppose that the universe is built up of many individual and independent machines. All these machines are ultimately governed by the same theory, although each machine imposes different *boundary and initial conditions* upon the theory. To make a machine, therefore, you must define specific boundary and initial conditions.

From a computing-theory perspective, the theory in question is the computer—also called the *Universal Turing Machine* or UTM—while the individual programs which impose initial and boundary conditions upon that theory are called *Turing Machines* or TMs. Each TM defines some initial and boundary conditions upon the UTM, and in so constraining the UTM, the TM becomes an *instance* of the UTM. Potentially, each TM could also run other TMs within it, and some TMs in modern programming—such as the runtimes for bytecode languages like Java and C#, or for interpreted languages like Python and Perl—instantiate TMs inside a TM. The point is simple: there is supposedly one UTM, which can spawn child TMs, which can, in turn, create further child TMs, and so on.

The boundary conditions on the UTM are called the *instructions* of a program, while the initial conditions on the UTM are called the *data* to the program. During execution, programs can exchange data with each other, thereby modifying their initial conditions, but they cannot change each other's boundary conditions—i.e. the instructions themselves. We suppose that each TM has some *identity*, represented by its logic, algorithm, code, etc., which cannot be tampered with at runtime. All these TMs can exchange data with each other, but they cannot modify each other at runtime.
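The separation between fixed instructions and mutable data can be sketched in a toy interpreter. This is only an illustration of the distinction, not a formal Turing-machine construction; the names `TM`, `run`, and `step` are invented here.

```python
from dataclasses import dataclass, field

@dataclass
class TM:
    """A toy 'Turing Machine': fixed instructions, mutable data."""
    instructions: tuple                        # boundary conditions -- immutable
    data: dict = field(default_factory=dict)   # initial conditions -- mutable

def step(op, key, value, data):
    """One 'universal' rule applied by the interpreter."""
    if op == "set":
        data[key] = value
    elif op == "add":
        data[key] = data.get(key, 0) + value

def run(machine):
    """The 'UTM' interprets a TM by applying its fixed instructions to its data."""
    for op, key, value in machine.instructions:
        step(op, key, value, machine.data)

tm = TM(instructions=(("set", "x", 1), ("add", "x", 2)))
run(tm)
print(tm.data["x"])  # prints 3; an elastic interaction could alter data, never instructions
```

Storing the instructions in a tuple (immutable in Python) mirrors the assumption that a program's identity cannot be tampered with at runtime, while the data dictionary remains open to exchange.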

**Classical Machines and Classical Mechanics**

This notion about computing has an analog in classical physics called *elastic dynamics*, in which objects exchange energy with other objects without changing their identity or boundary conditions. There is, however, also *inelastic* dynamics, in which objects can join and split, thereby changing their boundary conditions. Elastic and inelastic systems correspond to two kinds of computing systems: in the former, programs can exchange data but retain their instructions, while in the latter, programs can join or split, thereby annihilating existing programs or creating new ones.

The irony of classical mechanics is that the theory is deterministic only under elastic dynamics. If particles collide to merge or split, the theory is indeterministic. Why? Determinism depends upon solving a fixed number of equations, where each particle contributes an equation. If the number of particles changes (due to merging or splitting), then the total number of equations changes, and the system is indeterministic.
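The elastic case can be made concrete with the standard 1-D collision formulas, where conservation of momentum and kinetic energy fix a unique outcome. This is a sketch of the deterministic case only; a merge, by contrast, removes one body, changing the equation set itself.

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities of a 1-D elastic collision, fixed uniquely by
    conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Two equal masses exchange velocities -- one deterministic answer.
print(elastic_1d(1.0, 2.0, 1.0, 0.0))  # (0.0, 2.0)
```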

This fact has an important bearing on the computational problem: if we actually allowed programs to interact *inelastically* rather than just *elastically*—that is, if we permitted a program to change another program's instructions and not just its data—then the entire system would become indeterministic.

For instance, if two animals are denoted by two programs X and Y, and one animal eats the other, we can model this interaction as X merging with Y. This interaction is inelastic, and the resulting system is therefore also indeterministic. Indeterminism implies that you cannot predict whether one animal will eat another. When you take two machines and merge them into one, there are infinitely many ways to merge them, and you cannot predict which way the merger will actually occur. If you cannot predict the program Z that comes out of merging X and Y, you cannot predict anything after that, since the entire system's behavior can change if you change one of the machines in it.
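Even under the restrictive assumption that a merged program Z preserves the internal instruction order of both X and Y, the number of candidate mergers explodes combinatorially—a small counting sketch:

```python
from math import comb

def merge_count(m, n):
    """Ways to interleave two instruction sequences of lengths m and n
    while preserving each program's internal order: C(m+n, m)."""
    return comb(m + n, m)

for m, n in [(2, 2), (10, 10), (50, 50)]:
    print(m, n, merge_count(m, n))
# Order-preserving merges alone grow combinatorially; merges that may
# also rewrite instructions are unbounded.
```

For two 10-instruction programs there are already 184,756 order-preserving candidates for Z, with no rule in the dynamics to pick one.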

**The Quantum Machine**

The quantum world is even more bizarre because here the indeterminism exists even within a single machine! If a quantum ensemble represents the symbols of a program, the order in which these symbols will be executed cannot be predicted by any known theory. We can state the probability of finding an instruction Q after an instruction P, but we cannot definitively state that order. Therefore, we cannot say that there is indeed a definite computer program encoded in a quantum ensemble; what we can say is that there are a very large number of programs possible, but we don’t know which one.

The order of quantum encoding is *empirically fixed* but *theoretically indeterministic*. That is, if we did a measurement, we would find a particular order among the quanta, but this order is known only at the point of program execution, not beforehand. Having done a measurement, you would believe that there was indeed a program encoded in the ensemble, but the program is effectively destroyed during that measurement. The idea of a *stored program* in classical computation thus runs into additional difficulties: we cannot determine which program is stored without performing a measurement, and if we actually did a measurement, the program would be destroyed or modified.

An additional difficulty in quantum theory is that even if two systems can exchange information, we cannot predict the time at which such an exchange will occur. So not only can we not predict the order of events, we also cannot predict the exact time at which they will occur. Owing to these two problems, quantum theory becomes statistical, and a quantum machine must therefore also be statistical. Effectively, we can know that there are some symbols in a program, but the order of those symbols and the times at which they will be executed cannot be known. This makes quantum machines indeterministic as well.
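The statistical situation can be mimicked with a toy model: assume (hypothetically) fixed transition probabilities between instructions, so that only samples of an execution order—never the order itself—can be produced. The symbols P, Q, R and the probabilities below are invented for illustration.

```python
import random

# Hypothetical transition probabilities: the chance that instruction Q
# follows instruction P. Only these statistics are fixed, not the order.
NEXT = {
    "P": [("Q", 0.7), ("R", 0.3)],
    "Q": [("P", 0.5), ("R", 0.5)],
    "R": [("P", 0.6), ("Q", 0.4)],
}

def sample_order(start, steps, rng):
    """Sample one possible execution order consistent with the statistics."""
    order, current = [start], start
    for _ in range(steps):
        symbols, weights = zip(*NEXT[current])
        current = rng.choices(symbols, weights=weights)[0]
        order.append(current)
    return order

rng = random.Random(1)
print(sample_order("P", 5, rng))  # one possible order...
print(sample_order("P", 5, rng))  # ...and generally a different one
```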

Finally, exactly which symbols exist in the quantum ensemble also cannot be known, because the ensemble can be represented in several *eigenfunction bases*, each of which represents a different *language* or symbol-set of program encoding. You could think of this as an abstract encoding of a program that is then instantiated into different languages during measurement. A program can now only be thought of as some *algorithm* that can be represented in a language, but the language (e.g., Perl, Python, Java, C++, etc.) is only decided upon measurement.

If you thought that an object was, in fact, a program, you would be mistaken, because if you changed the measurement setup you would change the linguistic encoding of the program. The notion of reality—as it exists prior to being measured—has to be modified now. We cannot say that reality is a program, although we might say that reality is an *algorithm*. The algorithm is a conceptual and not a physical entity; it exists in an abstract rather than a contingent form. Any abstraction (or idea) has many possible contingent realizations, but they are all created upon observation. By analogy, what exists prior to measurement can be called a car, but what you observe upon measurement can be a hatchback, a sedan, or a sports car. The act of measurement modifies the original idea, but that modification only adds to the original content.

**Implications for Biology**

The indeterminism in both classical and quantum machines has important implications for complex systems such as living beings. In the classical case, the machine is deterministic only if the programs interact elastically—i.e. they run independently in terms of their instructions. In the quantum case, we cannot know either the encoded program (the order of instructions) or the language in which it is encoded. Both of these facts undermine any claim about determinism in the biological world.

A living ecosystem is not elastic—a system can create and consume other systems—and therefore even if we knew what each of these systems was *a priori*, we could still not predict which way they will evolve. Furthermore, due to the quantum problems, we cannot even know what those systems are *a priori*. In effect, we have no predictive mechanism for describing the evolution of an ensemble of energy—not just at the level of biological beings but even at the level of physical theories.

As systems become larger, the uncertainty grows; it does not reduce. Therefore, we cannot claim to know the evolution in an approximate sense, with only the details eluding us. The latter claim is often made by asserting that quantum uncertainties disappear at the level of macroscopic objects, which is contrary to quantum theory. If anything, the uncertainty of a larger system is much larger, simply because it has a much larger number of symbols whose order and states cannot be predicted. And even if the quantum systems became classical, the fact that an animal will eat another animal would itself make the system indeterministic, because now you have to deal with a variable number of physical equations.

These facts are important for biology in the context of evolution, because we suppose that these problems are somehow overcome to produce a direction in nature, when there is no fundamental theory that allows such directedness. If we cannot predict when a radioactive atom will decay, or when a non-radioactive atom will emit or absorb light, we certainly cannot predict how much larger systems of interacting atoms will behave. There is simply no theory in physics today upon which a biological theory of evolution can be based. A theory that explains and predicts accurately will obviously be quite different, and when such a theory is found, evolution will have to be understood anew as well.

**Natural Selection is Indeterministic**

Evolutionists suppose that there are indeed material objects with definite states, and that these states can be known and used in scientific predictions. This supposition fails for both classical inelastic systems and quantum systems, as shown above. You might claim that the living world is classically elastic (i.e. that no system modifies another system's identity), but that premise leads to a new problem, which I highlighted in an earlier post: if these programs are independent, then many of them would also be infinite—that is, non-terminating—and would result in eternal living beings. The only way such an infinite program could be terminated is if it were modified by another program, but that modification again makes the entire system indeterministic.

Evolution is therefore caught on the horns of the following dilemma: if the system is elastic and classical, then there must be eternal beings; if the system is classically inelastic or quantum-theoretic, then there is simply no predictive power. We can observe the facts of evolution, but we can neither explain nor predict them. We cannot explain these facts because there is no explanation for why a system splits into two (or more) systems, or merges with other systems with which it could instead have interacted elastically. We also cannot predict evolution because the system is highly indeterministic. If you suppose classical elastic dynamics, you run into a contradiction with observations: we never see eternal beings. If you suppose classical inelastic or quantum dynamics, the system is predictively incomplete. We must choose between inconsistency and incompleteness.

**The Fallacy of Natural Selection**

It is commonly supposed that random mutations create new traits, and natural selection eliminates many of these traits. In evolution, these two are believed to produce a direction in nature. Critics of evolution often ask one of the following two questions: (1) can there be fast-enough mutations to produce a sufficiently differentiated living being? and (2) can these radically different beings eventually survive selection? The issue is that if you had very quick mutations, the likelihood of survival in a stable system would be negligible. If, on the other hand, the mutations were very slow, then they would not create sufficient differentiation to account for biological diversity. So it might seem that there must be some "optimal" rate of mutation which creates sufficient diversity while also frequently eliminating the incompatible mutants. You might now even go about mathematically modeling what this optimal rate should be.

I am quite certain that if someone undertook this exercise, they would find a rate of mutation, followed by natural selection, that explains and predicts a certain amount of biological diversity. However, I am also certain that you still could not explain *which* species would be created and *when*. Notably, the question of diversity is different from the question of the individuality of species. When you hold a party at your home, you don't just want to know "how many" people are going to attend, but also "which" ones. Selecting a given number of attendees from a potentially infinite pool of possible attendees is not trivial. And if you cannot decide which ones are going to attend, you don't have a party.

At any given rate of mutation, there can be a certain amount of diversity: this diversity may be too large or too small relative to observations, but the diversity itself doesn't tell us which individual species will be created. This problem can be illustrated quite simply by the following example of peg-hole compatibility. Suppose that you have a large amount of wood, which you can carve into pegs and holes. You could carve the wooden block into square or circular pegs and holes. Square pegs will fit into square holes, and circular pegs will fit into circular holes. However, peg-hole compatibility is itself useless for prediction unless you know *a priori* either the shape of the peg or the shape of the hole. If there are infinitely many possible holes, then the compatible pegs will also be infinite. How do we predict which holes will be real? Unless we can predict either the peg or the hole *a priori*, their compatibility is useless. Since adaptation is like peg-hole compatibility, there are infinitely many possible compatible conditions. Which of these conditions is real?
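The point can be put in a trivial sketch: a compatibility predicate admits every matched peg-hole pair and singles out none of them. The short shape list here merely stands in for an unbounded space of possible shapes.

```python
from itertools import product

# Stand-ins for an unbounded space of possible shapes.
SHAPES = ["square", "circle", "triangle", "hexagon"]

def compatible(peg, hole):
    """A peg fits a hole of the same shape."""
    return peg == hole

# Compatibility admits one pair per shape, but nothing in the predicate
# says WHICH shape is realized.
pairs = [(p, h) for p, h in product(SHAPES, SHAPES) if compatible(p, h)]
print(pairs)
```

As the shape list grows, the set of compatible pairs grows with it; compatibility filters pairs but never picks a shape.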

If you suppose that species are created by random mutations followed by natural selection, and you grant a certain theoretical rate of mutations, you can predict a certain amount of diversity. But you have no way of predicting the species themselves. Each mutation rate will produce innumerable mutually compatible alternatives, among which selection alone cannot choose. To create an actual biological ecosystem, you now need a *third* mechanism beyond random mutation and natural selection—one that selects from among the innumerable randomly created, mutually compatible alternatives.
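A toy simulation makes the diversity-versus-identity gap visible. Everything here is invented for illustration—the variant space, trial count, and survival probability are assumed parameters, not biological estimates: repeated runs yield similar diversity *counts* but different *sets* of survivors.

```python
import random

VARIANT_SPACE = 10_000            # a large space of possible 'species'

def evolve(rng, trials=1000, survive_p=0.05):
    """Toy run: random 'mutations' draw variants; 'selection' keeps each
    with a fixed probability. Returns the surviving set of variants."""
    survivors = set()
    for _ in range(trials):
        v = rng.randrange(VARIANT_SPACE)   # random mutation
        if rng.random() < survive_p:       # natural selection
            survivors.add(v)
    return survivors

a = evolve(random.Random(1))
b = evolve(random.Random(2))
print(len(a), len(b))   # similar diversity counts (around 50 each)...
print(a == b)           # ...but almost surely different species sets
```

The rate parameters fix how *many* survivors to expect, yet nothing in the model determines *which* variants survive—each run picks a different set.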

In other words, if you begin with random mutation, you need to bridge that randomness by a choice, because the randomness is *insufficiently* overcome by the premise of compatibility. You can eliminate square pegs if you had round holes, but you cannot predict whether the holes themselves will be square or round.

**The Flaw in Evolution**

Evolutionists often suppose that living beings are mechanisms—just like computer programs. There are two ways in which such programs could be conceived: as classical or as quantum machines. Both conceptions are problematic: a classical machine becomes indeterministic when the machines start eating each other, and a quantum machine is indeterministic even by itself. Even if you suppose that quantum indeterminism is overcome at macroscopic levels, the result is still a classical machine, which too is indeterministic. Although you can compute the rate of mutation and natural selection that will explain the amount of diversity in the living world, you still cannot predict which species would actually be created, due to the above two forms of indeterminism.

The inability to predict in evolutionary theory is well known, but that inability is supposed to be a practical limitation—of computing all the possibilities and their selection—not a theoretical limit arising from fundamental problems in current physical theories. Only when we recognize that the problems are theoretical (and not merely practical) can we see why evolution is not a conceptually sound theory that predicts and explains.