
In the last post, I discussed the implementation of AI and its use for controlling markets, governments, and people. In this post, I will do the opposite: talk about why AI is deficient. To reconcile these two seemingly contradictory positions, we will then discuss how AI becomes powerful only if society becomes homogeneous. The fact is that society has been progressively homogenizing through the spread of capitalism, democracy, industrialization, and globalization. This makes AI powerful because the diversity that complicates predictions is removed. In short, the technology of AI is useless unless the people in the society it controls are standardized to some specific norm.

Deficiencies of Artificial Intelligence

To understand the deficiencies in AI, one has to look at how it works on people. Most people are combinations of thousands of attributes, but for simplicity, I will use an example of two attributes.

Consider a person with two attributes— “rich” and “doctor”. There are certain ways in which most rich people behave—they spend on expensive clothes, cars, houses, vacations, etc. And there are certain ways in which doctors behave—they care for patients, listen to their problems, and cure illnesses; they also go to medical conferences, read medical journals, and propose improvements in medicine.

The reality is that different people have these attributes in various proportions. In Vedic philosophy, we call these dominant and subordinate. For example, some rich doctors can have richness as the dominant attribute and doctorness as the subordinate attribute. Such a person will spend on flashy cars, houses, clothes, and vacations, and pay less attention to the condition of patients, the latest research in medicine, medical conferences, and so on. He will rather focus on the shortest path to making the most money, because medicine is not itself the goal; it is the route to becoming rich. Conversely, if a rich doctor has doctorness dominant and richness subordinate, then such a person will donate his riches to charity, lead a simple and frugal life, and spend time curing patients, understanding medicine, and trying to contribute to its progress. The riches of such a doctor increase his focus on medicine because he is assured of personal safety. By contrast, the riches of the previous doctor decrease his focus on medicine, because he is interested in enjoying the riches and showing them off to other people.

As you can see, modeling the real world is very hard—even with just two attributes—because they exist in dominant-subordinate states, which people would ordinarily call varying proportions. In reality, there are thousands of such attributes, existing in different proportions, and modeling them into a trend is nearly impossible. In Vedic philosophy, each person is unique, which means that there are infinite attributes, and each person is a combination of some such attributes in different proportions.

Deficiencies of AI Training Methods

Now, a naïve person might argue: Even if there are infinite attributes, a computer system can still model them—uniquely for each person. Social media companies—as we discussed in the last post—are already trying to do that. But there is a problem: We don’t have a definition of each attribute a priori.

For example, we don’t know the meaning of the word “doctor” or “rich” a priori. This is because computers can never represent meaning. So, AI tries to substitute that meaning with data. This means that to define the meaning of “rich”, we observe a lot of rich people—who are judged to be “rich” by humans during the “training” phase—and produce a model of richness. For the computer, “rich” simply means a certain set of effects, and “richness” is the quality that produces that behavior. The computer cannot understand or represent that quality, but it can represent its effects. So, humans understand the quality, and computers collect the behaviors, and the human designates some behavior by some words such as “rich” or “doctor”. And that interface between humans and computers is called “training”.
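In machine-learning terms, this human-computer interface can be made concrete with a minimal sketch. All names and behavior features below are hypothetical, introduced only for illustration; the point is that humans attach labels like “rich” to observed behaviors during training, while the computer only ever stores the behavioral effects, never the quality itself:

```python
# Hypothetical sketch of "training": humans label behaviors; the model only
# ever sees the behavioral effects, never the underlying quality itself.

# Human-labeled training data: (observed behaviors, human judgment).
training_data = [
    ({"buys_expensive_cars": True,  "takes_luxury_vacations": True},  "rich"),
    ({"buys_expensive_cars": False, "takes_luxury_vacations": False}, "not rich"),
]

def train(data):
    """Memorize which behavior patterns humans labeled with which word."""
    return {tuple(sorted(behaviors.items())): label for behaviors, label in data}

def classify(model, behaviors):
    # The model can only recognize behavior patterns it saw labeled before;
    # the quality "richness" itself is nowhere represented.
    return model.get(tuple(sorted(behaviors.items())), "unknown")

model = train(training_data)
print(classify(model, {"buys_expensive_cars": True, "takes_luxury_vacations": True}))   # rich
print(classify(model, {"buys_expensive_cars": False, "takes_luxury_vacations": False})) # not rich
```

An unseen behavior pattern—say, expensive cars but no vacations—falls outside the model and returns "unknown", which is exactly the gap that the human labeler was bridging.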

The problem with “training” is that there is no universal characterization of the presence of an attribute in terms of behavior. For example, when a rich doctor lives a frugal life, there is a contradiction in modeling: The person is rich, but he is not extravagant. So, should we model this person as rich or poor? The intelligent human response would be: He is rich, but that richness is subordinated. The artificial-intelligence response would be: He is not rich, because he is not exhibiting extravagance.

In naïve computer language, we could say that the presence of each attribute interferes with the other attributes, such that you cannot characterize any attribute through a model unless you produce a model of qualities such as “rich” and “doctor” a priori. And you cannot do that for the simple reason that computers cannot encode qualities; they can only model behaviors, not the cause of those behaviors. When qualities are dominant and subordinate, reverse-engineering behavior into qualities is impossible, because frugal behavior is exhibited by both rich and poor people. A poor person is frugal because poverty is dominant, and a rich person is frugal because richness is subordinate. Since two opposite qualities—richness and poverty—can produce the same result of frugality, it is impossible to reverse-engineer a qualitative understanding of a person from behavioral modeling.

This qualitative understanding of the person constitutes what AI calls “classification”. The simple problem is that we cannot classify a person based on the observation of behaviors. However, if we know the qualities along with their dominant-subordinate state, then we can predict their behaviors.

Philosophical Origins of AI Deficiencies

We can restate this problem in philosophical language: Knowing the cause fully determines the effect; however, knowing the effect is insufficient to determine the cause. In the words of Willard Quine, this is the problem of “underdetermination”, in which the observation of effects underdetermines the cause. An example of this underdetermination is that frugal behavior can be caused both by poverty and by richness. Likewise, even a poor person can borrow money from a bank and exhibit richness for a short period of time. So, poverty and richness cannot be inferred from frugality and extravagance.
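This many-to-one structure can be shown in a toy sketch (the two-quality model and all names here are hypothetical, made up solely for this illustration): the forward map from qualities to behavior is deterministic, but inverting it from observed behavior alone is ambiguous.

```python
# Toy illustration: two different quality configurations produce the same
# observed behavior, so the behavior alone cannot identify the qualities.

# Each person is a quality plus a dominant/subordinate ordering.
people = {
    "poor_person":        {"wealth": "poverty",  "wealth_rank": "dominant"},
    "frugal_rich_doctor": {"wealth": "richness", "wealth_rank": "subordinate"},
}

def observed_behavior(person):
    """Behavior depends on which quality dominates, not on the quality alone."""
    if person["wealth"] == "poverty" and person["wealth_rank"] == "dominant":
        return "frugal"
    if person["wealth"] == "richness" and person["wealth_rank"] == "subordinate":
        return "frugal"
    return "extravagant"

# The forward direction (qualities -> behavior) is deterministic...
# ...but the reverse map (behavior -> qualities) is one-to-many:
causes_of = {}
for name, p in people.items():
    causes_of.setdefault(observed_behavior(p), []).append(p["wealth"])

print(causes_of["frugal"])  # both 'poverty' and 'richness' cause frugality
```

No amount of extra frugality data resolves the ambiguity, because the missing information (the dominant-subordinate ordering) is not present in the behavior at all.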

The net result of the problem of underdetermination is that we cannot “classify” a person correctly, and without classification, we cannot predict their behaviors. Whatever “typical” models we create based on a typical behavioral understanding of richness and poverty will have many exceptions, because there are so many atypical behaviors due to dominant-subordinate states. Those exceptions entail failures in prediction, which then make AI useless, as it generates many false positives and false negatives.

The Role of Societal Diversity

However, the atypical cases are prominent only when society is diverse and qualities exist in various proportions or dominant-subordinate states. If society is homogenized, then there is a standard dominant-subordinate pattern of qualities. That homogenization means that all rich doctors will be extravagant, and all poor doctors will be frugal, because the amount of wealth they have is the dominant determinant of their behavior. In short, to construct a model of behaviors, we must first construct a standardized hierarchy of qualities—such as wealth at the top, followed by looks and appearances, followed by a person’s professional stature, followed by the work they do, etc. Under this standardized model, rich and good-looking people will always be talking about their richness and good looks rather than discussing their profession, and those who don’t flaunt their riches and looks and instead focus on their profession must be poor and ugly. If society can be standardized in such a model, then AI can become very successful, because the atypical cases disappear from society. Everything neatly falls into a standard behavioral model, which makes predictions easy for an AI system.

The general principle of accurate predictions is that qualities fully determine behaviors, but when qualities exist in different dominant-subordinate states, behaviors cannot be reverse-engineered into qualities. However, if society is homogenized in terms of its qualities, then behavior can be reverse-engineered, encoded in a computer model, and predicted for any given input. In short, AI is successful when society is homogeneous, and unsuccessful when society is diverse. As people become standardized through indoctrination in modern science and the acceptance of a political, social, and economic ideology, it becomes easier to control them, because they are now effectively like machines—standardized, replaceable, replicable, and uniform. A person is now effectively like a nut, bolt, or gear.
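Continuing the earlier toy model (hypothetical names, and assuming a fully homogenized society in which wealth is always the dominant quality), the quality-to-behavior map becomes one-to-one, so it can be inverted from observations alone and then used for prediction:

```python
# Hypothetical sketch: in a homogenized society the quality hierarchy is
# fixed (wealth always dominant), so behavior determines quality uniquely.

def behavior_homogenized(wealth):
    # Wealth is the sole dominant determinant of behavior by assumption.
    return "extravagant" if wealth == "richness" else "frugal"

# The forward map is now one-to-one, so it can be inverted from data alone:
observations = {w: behavior_homogenized(w) for w in ("richness", "poverty")}
inferred_quality = {behavior: w for w, behavior in observations.items()}

print(inferred_quality["frugal"])       # poverty -- no ambiguity remains
print(inferred_quality["extravagant"])  # richness
```

The dictionary inversion in the last line is exactly what fails in the diverse case: there, two keys map to the same behavior, and the inversion silently discards one of them.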

The Standardization of Humanity

Every totalitarian ruler dreams of standardized humans. This is the essence of the movie The Matrix: if you surrender your individuality to the “system”, then the “system” rewards you. By that reward, the “system” tells you how a human has to behave. If you deviate from the set behavior, then the “system” punishes you. By that punishment, the “system” tells you how a human should not behave.

Over time, through rewards and punishments, you train a person to become a standardized nut, bolt, or gear. Standardization is the preliminary step toward social control. Non-standardized humans cannot be controlled easily. And standardized humans can be controlled very easily. So, as humans conform to some expected norms of behavior, they become easily controllable.

All large modern societies try to produce standardized humans by indoctrinating them through education, television, political and economic policies, and socially acceptable norms. The AI system can do the same. Even if it cannot gain a good understanding of a person, it can still “train” them by injecting moods, ideas, and judgments through images, news articles, and role models. As a person is repeatedly exposed to certain types of images, news articles, and role models, he or she starts becoming standardized.

For example, if women wearing certain types of clothes are promoted and popularized as the de facto standard of beauty, then most women will gradually accept that standard. They will feel guilty about not having such clothes and try to model themselves after the advertised standard. Likewise, if certain types of language, ideas, and beliefs are discouraged, then people accept the remaining ones as standards and begin conforming. Thereby, a standardized human is produced in society.

You don’t even have to know what the humans are, to begin with. Instead, you conjure a predefined image of the human and feed it repeatedly to them. Over time, they will become the thing they see. Then you already know what they are, and you control them accordingly.

Standardization Overcomes Flaws of AI

Therefore, AI has fatal flaws if we think of it as modeling reality, because it cannot correctly model diversity, understand qualities, or predict behaviors based on qualities. However, AI is very powerful if it is used to indoctrinate people to create a standardized human. The flaws of AI are relevant only if reality is diverse. If diversity is destroyed by indoctrination, thereby producing a standardized human, then that standardized nut, bolt, or gear can be easily modeled by AI, giving the AI system control over people’s lives.

Therefore, the flaws of AI can be overcome through systematic indoctrination, enforcement of politically correct behaviors, homogenization of society, and the destruction of ideological and cultural diversity. In one sense, the brouhaha about the power of AI is misleading. In another sense, it is a scary truth. AI is powerless in the face of diversity. And yet, it gains power by indoctrination.

The technology giants and the experts in AI understand these things, but ordinary people don’t. And because ordinary people don’t understand the flaws of AI, and how it can gain power via indoctrination, they fail to see why AI technology giants are so enmeshed with a certain definition of a standardized human. The giants know the flaws of AI, and they know that to improve it, humans have to be standardized.

How Tech Companies Push Social Standardization

In the US, for example, AI is done in Silicon Valley companies, and they have a premeditated idea of a standardized human. This human works long hours; eats food by calorific value, divided into proteins, carbohydrates, and fats; is polite toward other people; upholds gender equality; respects other races, religions, and countries; favors democracy and capitalism over other systems; believes in college education as the road to a happy life; is prepared to take on large debts to finance education, house, and car; and invests most of their earnings through the stock market in other technology companies that advance this agenda (preferably AI itself).

With this premeditated idea of a standardized human, Silicon Valley companies finance, promote, and advocate a norm of humanity to the rest of the world. As technology permeates, society gets more standardized, which gives Silicon Valley companies the power to predict and control people better, and to push for more standardization of education, culture, politics, economics, and social norms, which then further increases their power to push even more standardization. In short, indoctrination creates a standardized human, which gives AI the power to predict and control, which perpetuates the cycle of indoctrination, standardization, and increasing control.

This is a journey in which one starts from a position of weakness but gains immense power over time. Paying people high salaries if they conform to a predefined standardized norm is part of that process—it is a reward system that works for you if you accept the definition of a standardized human and advance it. Thereby, technology direction is now deeply enmeshed with a social, economic, political, cultural, and scientific ideology, because those at the helm of these technology companies know that their future power rests on today’s indoctrination.

Just like nuts and bolts have standard sizes, are mass-produced, and easily fitted in many types of machines, similarly, if humans can be standardized then they can be mass-produced and easily moved from one corporation, society, or country to another. This ability of a nut or bolt to fit into different kinds of machines is a “freedom” and its counterpart in society is that people can move from one corporation, society, or country to another. Thus, technology companies talk about freedom, but it is not the freedom that you and I normally imagine—i.e., to be what we want to be, to choose our lifestyle, ideals, values, and worldviews. It is rather the freedom to move within a system governed by some standards. Therefore, the discourse on “freedom” and “liberty” is highly misleading, because there are two radically different ideas about freedom: The freedom to move a nut or bolt from one machine to another vs. the freedom to be what you want to be.

The Progress of Industrialization

This trend, however, is not new. At the dawn of industrialization, people had to be fitted into factories: working standard hours, doing standardized jobs, governed by a system of standardized rules and processes, being paid standard salaries, dressed in standardized attire, and conforming to a standardized way of talking. This standardization then progressed into what a person learned in school, how he or she was tested by standardized examinations, and how a person had to master a standardized textbook in order to succeed in those examinations. The process then deepened into a definition of social norms whereby a person had to have a house in a certain type of locality, was measured by his or her title in the company, and had to buy everything from standardized outlets, which placed their products in some standardized fashion. It then progressed into the standardization of what types of ideas are acceptable in academia, what political positions are acceptable for politicians, and what economic policies must be followed lest there be retaliation from powerful forces.

Industrialization means standardization. As time passes, you conform to more and more standards. It is only because society has already been homogenized, through the proliferation of standard scientific theories and of economic, political, and sociological norms, that we can even imagine the potential of AI to control people.

Even then, AI is not that powerful unless people are further standardized. Therefore, AI is the next step in industrialization: the standardization of your personal preferences, values, and moral systems, and the adoption of standard ethics. Greater industrialization entails greater standardization, greater totalitarian control, greater loss of the freedom to be what you want to be, and a greater surrender of your thinking to a system of rewards (if you conform to the standard) and punishments (if you deviate from the standardized idea of a human). Ultimately, humans too are machines in an industrialized society. And because this idea is not new, and has been progressing gradually, by the time excessive technological control of society arrives, people will accept it just as they have in the past.

It is just like how industrial workers accepted standardization in factories, education, and social, economic, and political thinking. Until that goal is achieved, advocates of AI will talk about how technological progress made society better in the past, and how prosperity followed each new phase of industrialization, while never talking about how each individual is becoming a mechanized cog in a larger machine.

Industrialization Ends People’s Freedoms

A machine has no choice; it does what its owner wants it to do. Likewise, as humans become standardized, they become mechanized, and they lose control over their lives. They are owned by a person who owns a gigantic machine, and the essence of humanity—i.e., freedom and choice—is destroyed. You lose the idea that you have a purpose in life, that you can choose a purpose, and lead the life that you want, because you are indoctrinated by education in school, by media in your free time, by a corporation while you are working, and by the pressure to conform by your friends and family. The economic rewards you get are in return for surrendering your freedom, time, and priorities.

The difference is simply that as industrialization progresses, it encroaches upon deeper and deeper aspects of a person. Initially, when you work in a factory, industrialization controls only your body. Then, through industrial education, it takes control of your mind. By encroaching upon the economy, society, and politics, it dictates your self-beliefs. Then, it takes hold of your goals by telling you what is respectable and lovable in an industrial society. Then, it takes control of your moral virtues, or what you consider morally right and wrong. Finally, you lose every kind of choice that made you distinctly you.

These invasions and intrusions of industrialization are spiritually devastating in the sense that as people lose control over their lives, they stop believing that they have a choice. With the death of choice dies the idea that there is a soul. With that dies the idea that there is a world beyond this world. And with the death of transcendence dies the idea of God, and of a relationship between soul and God. Society now becomes atheistic, simply because you believe that you don’t have a choice, since you were indoctrinated to be standardized.

AI is just the next step in this spiritual destruction. It is not a new thing; it is a progression of a trend that began with industrialization. The novelty is that it is deepening the control over humanity, reducing people’s choices, and standardizing their likes and dislikes, cultures, and political, economic, social, and scientific ideologies. Therefore, there is no “good” vs. “bad” AI. It is all bad, because as it progresses, humanity is standardized, freedoms are lost, and people lose the sense of choice and spirituality. That spiritual decline outstrips whatever material benefits may accrue temporarily. And it progresses because people don’t have a spiritual vision for themselves. So, they prioritize material benefits over spiritual well-being and let industrialization gain increasing control over their present lives.

The Role of Spiritual Education

This is where spiritual education is important: It tells us that we have a choice, that we are spiritual beings, that despite what our employer, family, friends, and education tell us, we can reject these things. Of course, such rejection is very hard, because with passing time, the pressure is constantly mounting. The more you succumb to pressures, the more you will lose the willpower to resist them in the future.

The discussion of the flaws of AI is a stepping stone in this direction, because we can see that machines don’t have the capacity to process meaning or choices, and they are powerless in the face of humans who have the capacity for meaning and choice. A machine should never be able to control a human. However, the machine gains this control if we become standardized to behave in some predefined ways. The power of AI is the result of the power that we have gradually and progressively surrendered via industrialization. If we want to take back that power, then we have to reject the indoctrination, and even though that seems hard, the rejection is not whimsical or irrational. It is founded on a scientific understanding of the body, the mind, and the soul: how the soul is trapped in the body and mind, and how, by progressively controlling the body and then the mind, a machine can gain complete control over the soul.

The best day to begin spiritual education was yesterday. The next best day is today. As time passes, the will to reject the indoctrination decreases, and surrendering to external control seems a more comfortable alternative. Thus, things will not be easier tomorrow than they are today. They will be harder. This should be a wake-up call to anyone who wants to push this problem to the future. The system may or may not change, but you can change by seeing its flaws, rejecting the system, and protecting yourself from its invasive techniques. That ability to separate oneself from the system itself requires detachment, which is a reflection of a choice, which is a symptom of the soul.

The choice of the soul is simply attachment and detachment. When you attach yourself to something, you surrender your freedom. When you detach yourself from it, you regain your freedom. Ultimately, we cannot remain detached; we have to attach ourselves to something. That means surrendering our freedom to be controlled by someone. Who is that someone? It could be a person who wants to love you or a person who wants to use you. Since you can choose where you attach, you have a choice. Since you can choose better and worse points of attachment, there are better and worse choices. That choice is never lost or destroyed. But a person who is attached to one thing may take time to detach from it and attach to something else. That time can be a spiritual practice.