When John von Neumann introduced the idea of the “conscious collapse” into quantum theory, he committed a heresy—or at least something that would have been considered a heresy until that point—by admitting a causal agent called “consciousness” into science. Science had until then worked explicitly to keep mind and consciousness out of the study of the material world, and were it not for von Neumann’s considerable reputation, the idea would have been dismissed as lunacy right away. After von Neumann, the word “consciousness” is no longer regarded as crazy; it has, in fact, become the newest frontier for materialism to conquer. Like everything else in nature that has an underlying “mechanism”, we suppose that consciousness too must have a mechanism, to which consciousness can be reduced. This post explores this belief and shows that while choices do have a mechanism, a reduction is still impossible.
The Problem of Conscious Collapse
The conscious collapse in quantum theory represents the classic mind-body divide, except that the “body” in question is no longer an object; it is a state of possibility (also called the wavefunction). If choices select from amongst these possibilities, then two problems arise. First, the burden of causality rests upon the choice, and one would now have to study consciousness to explain the evolution of the world. Second, we don’t understand whether consciousness is a singular entity or composed of many parts: Is the choice an outcome of many interactions between the parts, or an event irreducible to parts, and to material possibility?
Reductionists would like to believe that conscious choices are the outcome of complex processes that occur in the brain. But if the brain is already material, and therefore in a state of possibility, then the emergence of choices from within brain processes presents a new kind of problem, beyond the “collapse” problem that could assume choice as an external causal agent. Assuming consciousness in order to explain collapse, and explaining collapse materially in order to eliminate consciousness from science, are quite distinct problems. The problem of quantum collapse is entwined with that of consciousness because it is supposed that if we could explain this collapse materially, then we would undo the heresy that von Neumann committed by inducting consciousness into science.
The Problem of Mental Representation
There are several serious issues in the reduction of consciousness, which I will discuss one after another. These issues begin with the problem of mental representation, or the idea that the mind represents the world. Before we can reason about the world, we must represent it. Unfortunately, matter as described in science cannot represent anything other than itself, because all material states are descriptions of the objects that possess those states, not of the objects they refer to.
If you measure the mass of an object, you cannot say that this mass describes the properties of another object. Similarly, all physical properties only describe the objects that possess them. If the brain were one such object, then all its properties would be properties of the brain, not references to properties outside the brain. How a brain state becomes a symbol of a world state cannot, therefore, be explained unless we acknowledge that the world states themselves are symbols. In other words, to solve the problem of mental representation, we have to change the basic ideology about matter.
Similarly, by representing the idea of a rose, the brain does not become a rose. There is an essential difference between being a rose and knowing a rose, and knowing a rose doesn’t make you a rose. That difference cannot be incorporated into the physical theory of nature, because we cannot distinguish whether a physical state constitutes a thing’s own being or knowledge of another being’s state. Before choices can operate, there must be some ‘content’ on which they operate: e.g. a choice can accept or reject a claim about the world, but only if there is first some claim to accept or reject. How a physical state becomes the symbol of an object outside the brain remains an unsolved problem.
Choice Involves Rationality
In recent times, philosophers of mind have bypassed this problem and proceeded to worry about subsequent problems, assuming that we will somehow, someday, find an explanation of mental representation. The critics of choice now argue that given mental representations, choice is nothing other than rationality. The analogy is to a computer, which also “reasons” with some data. Note the difference between the computer and the human, however: the computer doesn’t know that its bits are representations of something else; their interpretation or meaning lies with the programmer. But if you suppose that the problem of mental representation will be solved separately, then you can proceed to argue that choices are nothing other than rationality.
For instance, if you are driving from home to work, and there are many possible routes that you could take, the selection among these routes appears to be a choice. However, the critic of choice argues that this selection is subject to constraints such as the time you start, the time you want to arrive, the speed at which you would like to drive, the errands you want to run on the way, the co-passengers you might pick up on the way, the level of road rage you are prepared to deal with, etc. Once you take these factors into account, the choice is no longer free. The problem of choice essentially reduces to a mathematical optimization problem, such as the Traveling Salesman Problem.
This criticism is not flawed, but it misses a key point: which factors need to be optimized is not given by the reasoning itself. Even within the optimization, what must be maximized or minimized isn’t given. In short, once you define the problem to be solved, the solution is mathematical and rational. But how do you know which problem you are going to solve? The car by itself doesn’t have the problem of choosing a route. Why, then, do we as conscious beings have that problem?
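The point can be made concrete in a short sketch (with hypothetical routes and numbers): once an objective is fixed, the selection is purely mechanical, but the objective itself is an argument supplied from outside the optimizer.

```python
# Hypothetical routes from home to work, each with several attributes.
routes = {
    "highway":  {"minutes": 25, "km": 30, "toll": 5},
    "downtown": {"minutes": 40, "km": 18, "toll": 0},
    "scenic":   {"minutes": 55, "km": 35, "toll": 0},
}

def best_route(objective):
    """Rationality as optimization: mechanical once the objective is fixed.

    The function cannot tell us *which* objective to pass in; that
    selection precedes the optimization.
    """
    return min(routes, key=lambda name: objective(routes[name]))

# Different chosen objectives yield different "rational" answers.
fastest  = best_route(lambda r: r["minutes"])                # -> "highway"
shortest = best_route(lambda r: r["km"])                     # -> "downtown"
frugal   = best_route(lambda r: r["toll"] * 100 + r["minutes"])  # -> "downtown"
```

Each call is fully deterministic, yet nothing inside `best_route` determines whether minutes, kilometers, or tolls matter; that is the residue the critic's argument leaves unexplained.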
Choice Doesn’t Reduce to Rationality
There is reasoning involved in the solution to a problem, but the selection of the problem is itself not rational. For instance, instead of commuting long distances and trying to optimize the travel time, you might just work from home, or take a job that is closer to home. Reason helps us optimize the solution to a problem, but it doesn’t by itself help us pick the problem. For instance, the job nearer to your home might pay less than the job far away. Do you want more money and less personal time, or more personal time and less money? That ultimately becomes a choice.
Similarly, rationality is a combined product of two distinct things—logic and axioms. The truth of a claim is relative to the truth of the axioms. So, in principle, you can prove anything (unless it is logically contradictory) provided you begin from the right set of axioms. To judge the truth of a claim, therefore, you need an axiom set whose truth is given a priori. Hence, the fact that there is some mechanism involved in rationality, and the fact that choices are made rationally, don’t reduce choices to rationality, for two prominent reasons: (1) rationality doesn’t decide which problems must be solved, and (2) rationality doesn’t determine which axioms must be used to reason.
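The logic-versus-axioms split can be illustrated with a minimal sketch (the facts and rules are invented for illustration): one fixed inference mechanism, applied to two different axiom sets, yields two different sets of “truths”.

```python
def derive(axioms, rules):
    """Close a set of facts under inference rules of the form
    (premises, conclusion) — a simple forward-chaining loop."""
    facts = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The logic (the rules and the chaining mechanism) stays fixed.
rules = [
    (("rains",), "wet"),
    (("wet", "cold"), "ice"),
]

# Different axioms -> different derivable truths.
derive({"rains"}, rules)           # {"rains", "wet"}
derive({"rains", "cold"}, rules)   # {"rains", "cold", "wet", "ice"}
```

Nothing in `derive` can tell us which axiom set to start from; that selection is prior to, and outside of, the inference mechanism.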
Only when you have chosen a set of axioms can you define a language in which meanings can be expressed. And only when you can define a language can you have a mental representation that indicates knowledge of the world. In that sense, your sense of truth, which appears as the selection of an axiomatic basis of assumptions, conditions the language in which you think about the world. If you have a different set of axioms, you won’t just have a different set of truths; you will also have a different set of statements, and some statements can never be understood without changing the axioms.
Rationality Depends on Goals
Some people are constrained to desire more money simply because they have a larger family to maintain, need to pay off a larger mortgage, or want a higher social status in life. But this reasoning begins in assumptions—e.g. that someone has a larger family, or that they want a higher social status, or that they borrowed money to enjoy that social status. These are valid goals, and they become the basis on which a particular set of axioms is selected for reasoning (in order to solve specific problems), but they are not produced by reason itself. In fact, these goals must exist before an axiomatic basis is selected, which then frames the meaningful propositions before reasoning can be applied. After all, if we don’t know what problem we are solving, how can we apply reasoning to solve it? The key point is that before reasoning can be applied, we must frame some problems to be solved, or goals to be achieved.
These goals aren’t necessarily consistent. Just as reasoning involves a consistency check among ideas (only a consistent and mutually exclusive set of ideas can become the axiomatic basis), so also, for reasoning to be grounded in goals or problems, there must be a consistent set of goals. Reasoning cannot produce a consistent output if there are contradictory goals or problems. However, goals are often inconsistent.
Conflicts and the Judgment of Right
One common source of conflict in goals involves the judgment of what is right and what is wrong. If you have a personal goal to be very rich, and another conflicting goal to have a lot of personal time, then this conflict could be resolved if you could rob a bank. The only problem with that goal is that it contradicts the goals of other individuals, who would like honest, hard-working people to keep the money they have earned. Having a consistent set of goals is therefore an issue of internal consistency within an individual as well as of consistency of intentions among individuals.
This consistency in turn depends on what we consider “right” goals in society. These can include, for instance, the right to education, the right to healthcare, the right to a family, the right to a house, the right to food, the right to a decent living, the right to a dignified retirement, etc. Underlying the idea that we have rights is the idea that something is morally correct. Clearly, there is nothing “rational” about these “rights”. There could also be a society in which some or all of these are not rights, while other things are. Politicians frequently debate the notions of “right” and “wrong”, and people choose those politicians based on their personal goals.
The point is that there is rationality involved in determining goals, but that rationality doesn’t discuss the issue of true and false based on an axiomatic basis; it rather discusses the issue of what we consider good or bad, followed by right or wrong. This rationality deals in what we consider valid goals for individuals, and by implication for society as a whole. It is important to find a consistent set of morals, which then lead to a consistent set of goals before a consistent set of axioms can be employed to solve the problems and meet the goals.
Goals Depend on Morality
Ultimately, which goals are right can only be decided if a system of moral values has been predetermined. From the standpoint of goals, these moral values are choices, just as from the standpoint of reasoning, the problems to be solved are choices. Our moral values decide what we consider right or wrong. These notions of morality are ideally expected to subordinate our notions of good or bad. By definition, happiness is good, and distress is bad. But what makes you happy or distressed isn’t determined a priori. Your notion of happiness depends on your personality, or what you deem to be good or bad for you. Similarly, our moral values are not given a priori. They too are choices: a society settles on some collective morals for the sake of mutual peace and respect.
Ideally, our sense of right decides our definition of good, and our definition of good determines what we consider true, our notion of truth prescribes what is meaningful, and the notion of meaningfulness determines what we see and do. Each one of these involves choices, but you can push the problem of choice one step backward by basing your choice on something else. However, in many cases, these relations can be inverted. For instance, we can subordinate the question of right to the question of good when we become selfish and start acting in our own interest rather than the collective interest. Then, our notion of good decides our definition of right, our definition of right defines what is true, the notion of truth defines what is meaningful, and so forth. In some cases, the notion of truth might also subordinate the notions of right and good; we might say that we must follow the truth, even if it is painful to us and hurtful to others.
A set of physical states can be given many meanings, which appear as a choice. You can push the problem back by defining a set of axioms that decide your language—i.e. the mapping between physics and semantics. Now, the selection of an axiom-set appears as a choice. You can further push this problem backward and define a set of goals or the notion of good and bad, which produce the axioms, and this selection of goals becomes the new choice. You can even take this problem a step backward and define the goals based on a set of moral values, or what constitutes our sense of right and wrong, and the selection of these moral values now appears as a choice.
What is Choice?
Choice appears in many forms. It appears in the act of trying to associate meanings with physical states, in trying to define an axiom set which fixes the truth judgments, in trying to justify the axiom selection based on a set of goals, and in trying to justify the goals in terms of some moral values. However, if you have gone sufficiently far in kicking the can of choice down the road, there comes a point where you hit a wall. At that point, you must say that the choice is not based on something outside, but something innate.
Each time you kick the can of choice, it appears that you have found a “mechanism” that explains the choice. For instance, if you are trying to explain the choice of an interpretation, you can justify it by choosing a language in which the mapping between words and meanings is fixed. By picking that language, you can claim that the problem of mental representation has been solved. But you have only pushed the choice back by one step. This isn’t the only step, of course. You are permitted to kick it a few more steps, as seen above. But there is a limit to how many times you can push this problem backward. Ultimately you arrive at a kind of choice that simply cannot be pushed back to something more “idea-like”; it can only be pushed into something that is “consciousness”. Thus, we can say that the choice of meaning can be reduced to the choice of axioms; the choice of axioms can be reduced to the choice of goals; the choice of goals can be reduced to the choice of morality. (These are just examples; as we saw above, these relations can also be inverted.) But what do you reduce your choice of morals to?
The Relation to Sāńkhya Philosophy
Sāńkhya is a theory of matter in which conscious choices are gradually objectified into matter. The first step of this objectification involves the choice of morality; this is called mahattattva. It is the idealization of how choice must operate. Once the choice of morality has been made, the choices of goals are made automatically to fulfill that ideal of righteousness; this is called the ego. Once the goals have been decided, a set of axioms or assumptions is determined; this is called the intellect. To judge the truth of claims, the claims must be consistent with all the axioms. From the choice of axioms, different kinds of propositions are created; this is called the mind. Once the propositions have been produced, they are gradually objectified into things and actions; these are called the gross visible world. The choices that appear in the visible world are fixed by the mind, the choices visible in the mind are fixed by the intellect, the choices in the intellect are fixed by the ego, and the choices in the ego are fixed by morality. The choices of morality are fixed by consciousness.
Consciousness, therefore, sets the world rolling by choosing a type of morality (in the ideal case). From this morality, goals of happiness are created; from the goals, beliefs or axioms are created; from the beliefs, meanings are created; and from the meanings, actions and things are created. Thus, once the definition of morality has been chosen, there is a mechanism for selecting goals. Once the goals have been selected, there is a mechanism for choosing the axioms. Once the axioms have been chosen, there is a mechanism for creating meanings. Once the meanings have been selected, there is a mechanism for converting the meanings into sensations and activities. Hence there isn’t a single mechanism for choice; there are many of them. But all these mechanisms depend on a choice of morality.
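The cascade of mechanisms described above can be sketched schematically (every mapping here is hypothetical and purely illustrative): each layer is a deterministic function of the layer above it, so the only input not computed by any mechanism is the topmost one, the moral stance itself.

```python
def goals_from(morality):
    """'Ego': goals fixed mechanically by the chosen moral ideal."""
    return {"duty": ["serve"], "enjoyment": ["acquire"]}[morality]

def axioms_from(goals):
    """'Intellect': assumptions fixed mechanically by the goals."""
    return [f"{g} is worthwhile" for g in goals]

def meanings_from(axioms):
    """'Mind': propositions framed mechanically by the axioms."""
    return [f"act so that {a}" for a in axioms]

def world(morality):
    """Morality -> goals -> axioms -> meanings: mechanisms all the way
    down, but the argument `morality` is supplied, never computed."""
    return meanings_from(axioms_from(goals_from(morality)))

world("duty")        # ['act so that serve is worthwhile']
world("enjoyment")   # ['act so that acquire is worthwhile']
```

The sketch mirrors the claim in the text: there are many mechanisms, one per layer, yet the composition as a whole still takes an unreduced choice as its input.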
These mechanisms only push the problem of choosing a level backward until we arrive at a kind of choice that cannot be pushed further backward. That choice is irreducible, and therefore constitutes what we mean by “consciousness”.