Artificial Intelligence, AI-related

Currently many industries are using AI: AI writes novels, robots perform surgeries. I would like to hear everyone's opinion on the theory that artificial intelligence will inevitably replace humans, the so-called "intelligent machine crisis."

That "crisis" is a first-person fantasy novel; even the AI developers themselves don't believe it. I'll explain how AI actually works later.

Please please please

Artificial intelligence is ultimately a program that operates according to fixed formal-logic rules, right? Without human creation, AI can do nothing; it also lacks many capabilities of the human brain. For example, I remember someone asking an AI to generate a first-person video of a diver, but because no such footage existed as training material, the AI produced nothing recognizable. A human painter, by contrast, even one who has never dived, can still depict the first-person perspective of diving through perspective drawing and other techniques.

From a philosophical perspective, AI is essentially a physical movement because it must run on a computer. It seems capable of providing a lot of information, helping you modify code, solving complex electrical engineering problems you can’t do, even drawing pictures, but it operates based on certain algorithms. It cannot compare to human thinking or social movements.

The so-called algorithms are not that powerful, because AI recognizes things without grasping their essence; it only picks up surface features. All algorithms are designed this way. For example, one service built an AI to detect pornographic images, but it ended up flagging photos of yellow deserts as pornographic, in effect reasoning: "Desert images are mostly yellow; pornographic images, with large areas of exposed skin, are also yellowish; therefore desert images = pornographic images." Humans don't make such mistakes. A human infers the content of an image from its outward appearance and then judges whether it is pornographic, which is why professions like pornographic-image reviewer still exist widely. Many video sites also rely on manual review; for instance, there were reports of a Bilibili reviewer working more than ten hours of overtime.
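The desert failure described above can be sketched in a few lines of Python. This is a toy illustration only, not any real moderation system; the thresholds, function names, and pixel "images" are all invented for the example. The point it demonstrates is the one made above: a classifier that looks only at surface colour statistics will flag anything sandy-yellow, because it never grasps what the image actually depicts.

```python
# Toy sketch of a surface-feature "porn detector" (hypothetical thresholds).
# It inspects only pixel colour statistics, never image content.

def is_skin_tone(pixel):
    """Crude rule of thumb: warm, yellowish colours count as 'skin'."""
    r, g, b = pixel
    return r > 150 and g > 100 and b < 120 and r > b

def flag_as_explicit(image, threshold=0.5):
    """Flag the image if most of its pixels look like 'skin'."""
    skin = sum(is_skin_tone(p) for p in image)
    return skin / len(image) > threshold

# A "desert" image: uniform sandy-yellow pixels.
desert = [(210, 180, 90)] * 100
# A "forest" image: uniform green pixels.
forest = [(40, 120, 50)] * 100

print(flag_as_explicit(desert))  # True: the false positive described above
print(flag_as_explicit(forest))  # False
```

A human reviewer resolves this instantly by judging content rather than colour; the sketch contains no representation of content at all, which is exactly its failure mode.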

Another common problem: AI-generated drawings often have a strange, unreal feeling. This is because AI cannot truly perceive the external world on its own; it can only be fed images and data by humans to improve its accuracy. Even obviously related objects cannot be connected by the algorithm alone unless programmers feed in the relevant data. That is why some AI-generated artworks contain illogical parts that even a kindergartener would not produce. For example, an AI-generated Mid-Autumn greeting card had a sleeve that abruptly turned into hair, and a handsome glass-window design had inconsistent brushwork. AI relies on algorithms, not thinking; it can only stitch surface phenomena together. That is why AI drawing has been called "corpse-piece stitching": a human would never sew a sleeve onto hair, and an experienced artist would not use inconsistent strokes in a glass-window design.

Ultimately, AI only follows its algorithms, recognizing and extrapolating from surface phenomena on the basis of data fed to it by programmers. A human, on seeing a surface phenomenon, will reason further about what underlying essence hides behind it, rise to rational understanding, and then use that understanding to judge other things. Even without data fed in by "programmers," humans can recognize new things on their own. That is why human society advances in a spiral, while an AI with no programmers feeding it new data can only spin in place on its existing data. Relying on such a system for artistic creation, let alone for running a country's politics, is even less dependable than having it teach you calculus.

However, if AI really did run the politics of multiple countries, it could indeed destroy the world, precisely because it is the kind of system that mistakes desert images for pornography.

Israel previously developed an AI targeting algorithm for bombing, claiming it could identify whether someone was Hamas from their behavior, or whether a building housed Hamas from its exterior. I thought about it all night and couldn't figure out how such an algorithm would work: mind reading, or X-ray vision? Presumably it just keys on whether a person acts furtively or a building is tightly sealed up, but such external signs tell us very little. Whether someone is Hamas depends on their thoughts, and thoughts cannot be seen from the outside, by AI or anything else.

The ‘intelligence’ of machines is actually human intelligence. Without people planning and strategizing from thousands of miles away, there would be no ‘intelligence’ of machines.
This issue was actually discussed long ago, during the socialist period. Let me repost an old article here for study. The viewpoints raised in it still apply today. Socialist China had already thoroughly analyzed the essence of so-called "artificial intelligence" shortly after it first appeared.

Electronic Computers and Human Thinking
Bian Sizu
Shanghai People’s Publishing House, 1974, “Journal of Dialectical Materialism,” Issue 4 (Total Issue 6)

Electronic computers are a major achievement in science and technology since the 20th century. In a certain sense, computers can perform reasoning and argumentation. This raises a significant epistemological question, which has caused a great shock in the ideological sphere of the capitalist world. Bourgeois thinkers, constrained by metaphysical thinking, have fallen into chaos in the face of this. The Soviet revisionist group and various reactionaries, for their political needs, have taken the opportunity to stir up and form a reactionary social trend. The emergence of this problem has greatly intensified the struggle between two worldviews and two paths of cognition.

(1) Electronic computers are an extension of the human brain

Calculation is a form of logical reasoning, a type of human thinking activity. Electronic computers are used to replace this part of human thinking activity; they are an extension of the human brain. Fundamentally, electronic computers cannot think. Some call electronic computers “logic machines” or simply “thinking machines.” This name is exaggerated. It can only be called a “thinking machine” in a certain sense.

Spinning machines convert the mechanical movements of human hands and feet into the mechanical movements of machines; generators convert one type of movement into another. Electronic computers, for their part, convert human thinking movements into the mechanical and physical movements of machines, such as the movements of electrons in electronic circuits. Why can they be converted? Because of their identity. There is identity not only among the various mechanical, thermal, and electromagnetic movements, but also between thinking movements and electronic movements. In epistemology, spirit and matter stand opposed as the subject and the object of cognition. But this opposition is conditional. From the ontological point of view, everything reduces to matter; the world consists only of matter and its various forms of movement. Mechanical movement, molecular movement, atomic movement, social movement, and thinking movement are all stages of material development, various forms of material movement. Thinking is merely a temporary and special form of material movement; its opposition to matter can only be relative, not absolute. Because of this identity of matter, some aspects of human thinking activity can be represented by electronic movements within electronic computers.

Human thinking is a form of material movement. It not only relies on physiological movements of the human brain but also continuously exchanges with the outside world in human social practice, transforming into some kind of material movement outside the brain. Without this transformation, there would be no human thinking. Marx said: “Man does not initially possess ‘pure’ consciousness. Spirit is very unlucky from the start, doomed to be entangled with matter,” which manifests as vibrating air layers, sounds, in short, language (“German Ideology”). Language is the material shell of thinking. Thinking movements must be transformed into movements such as vocal cord vibrations, oral movements, and sound wave propagation to become “direct reality” (ibid). Despite their different natures, these two forms of movement are interdependent and convertible.

Human social practice demands further ‘materialization’ of thinking, so that the process of thinking can be expressed through other material movements to some extent. To achieve this, it is necessary to understand certain laws of human thinking.

Classical formal logic is one form of understanding the movements of human thinking. It was a product of the intensified class struggle in the ideological sphere during the transition from slavery to feudalism. Formal logic attends solely to the forms of thinking: concepts such as one, two, three, human, horse, cow; relations between concepts (judgments); and relations between judgments (reasoning). In the actual process of human thinking, concepts, judgments, and reasoning are formed with specific content. But formal logic disregards all content. For example, in the reasoning "All men die; Zhang San is a man; therefore Zhang San dies," formal logic considers only the pure, abstract logical relations. This abstraction is very important. In debates, people always strive to keep their own arguments free of contradiction and their conclusions consistent with their premises, while trying to find contradictions in the opponent's arguments. All such considerations proceed purely from the logical relations among forms of thinking, asking only about logical right and wrong, not about actual truth or falsehood. This is the purely formal aspect of the human thinking process. Ancient Chinese scholars aimed to "distinguish right from wrong" and "identify sameness and difference," to "use names to point to realities, use propositions to express ideas, and use explanations to bring out reasons" ("Mozi, Xiaoqu"), all of which required thinking to conform to the laws of formal logic.
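The purely formal character of such reasoning can be made concrete with a small sketch (the function name, tuple shapes, and error handling here are my own invention, used only for illustration). A checker for the pattern "All M are P; S is M; therefore S is P" manipulates symbols alone, so it certifies the inference regardless of whether the premises are true of the world, which is precisely the distinction drawn above between logical right and wrong and actual truth or falsehood.

```python
# Sketch of a Barbara-form syllogism checked purely on symbolic shape.

def barbara(major, minor):
    """major: ('All', M, P); minor: (S, 'is', M) -> conclusion (S, 'is', P).
    Valid whenever the middle term M matches; content is irrelevant."""
    _, m1, p = major
    s, _, m2 = minor
    if m1 != m2:
        raise ValueError("middle term does not match")
    return (s, 'is', p)

print(barbara(('All', 'men', 'mortal'), ('Zhang San', 'is', 'men')))
# ('Zhang San', 'is', 'mortal')

# The same form accepts a factually false premise just as readily:
print(barbara(('All', 'men', 'immortal'), ('Zhang San', 'is', 'men')))
# ('Zhang San', 'is', 'immortal'): formally valid, factually wrong
```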

The completion of the capitalist industrial revolution further promoted the development of formal logic. Large machine systems not only reduced physical labor but also transferred many regulation and control tasks in production to mechanical devices, beginning to replace part of human thinking activity. Mechanized production prepared the way for mechanized thinking, making it possible for formal logic to be abstracted further into symbolic logic. This is a development of formal logic. Here, the various forms of thinking and their logical relations can all be expressed as combinations and operations of 0 and 1. One, two, three become 01, 10, 11; humans, horses, and cows likewise become different codes made of 0s and 1s. One, two, three disappear; humans, horses, and cows disappear; only 0 and 1 remain.

Whatever the thing in the world, it always divides into two, reducible to two poles: true and false, good and bad, more and less, existence and non-existence, and so on. Reflected in human concepts, these become various differences. "Every difference in human concepts should be seen as a reflection of objective contradiction." Strip away the specific content of these differences and they can all be represented by two basic differences. "Yin and Yang are called the Way." Yin and Yang are these two basic differences, capable of representing all kinds of specific things. Yin and Yang are also 0 and 1. The dots and dashes of Morse code can form all kinds of characters; dots and dashes are likewise 0 and 1. And 0 and 1 are superior to Yin and Yang, or to dots and dashes: they can be used in calculation, 0+1=1, 0·1=0, and they can also represent different logical inferences. The result of an operation, either 0 or 1, corresponds to the result of a logical inference, that is, either this or that. Thus calculation is reasoning and reasoning is calculation, and the formal side of human thinking is quantified and symbolized.
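The claim that "calculation is reasoning" can be sketched directly (the encoding below is my own toy rendering, not taken from the article): treat OR as the article's "+" and AND as its "·", and a rule of inference such as modus ponens becomes an identity that holds under every 0/1 assignment.

```python
# 0/1 logic in which evaluating a formula and drawing an inference coincide.

def OR(a, b):    # the article's "0+1=1"
    return max(a, b)

def AND(a, b):   # the article's "0·1=0"
    return a * b

def NOT(a):
    return 1 - a

def implies(p, q):
    # p -> q rendered as (not p) or q
    return OR(NOT(p), q)

# Modus ponens, ((p -> q) and p) -> q, evaluates to 1 for every assignment:
for p in (0, 1):
    for q in (0, 1):
        assert implies(AND(implies(p, q), p), q) == 1
print("modus ponens holds for all 0/1 assignments")
```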

Abstracting human thinking to such a high level makes it possible to transfer the purely formal aspect of human thinking process to machines. Without this level of abstraction, it is impossible. In the Middle Ages, some tried to create a “logic machine” using a symbol to represent a concept (good, bad, big, small, god, ghost, etc.), but because they failed to abstract a purely symbolic thinking form, they wasted much effort. Sometimes, sacrifice of content is necessary to gain something. If the content of thinking cannot be discarded, pure formal logical laws cannot be derived, and a “logic machine” cannot be built.

“Freedom is the understanding of necessity and the transformation of the objective world.” To create a “logic machine,” humans needed not only to understand the laws of thinking but also to provide the necessary material conditions through productive practice. The abacus is not enough: although people condensed the rules of addition, subtraction, multiplication, and division into mnemonic rhymes, recited them aloud, and calculated with practiced finger movements, the movement of the abacus beads cannot be automated. Hand-cranked calculators are also insufficient; they rely on human hands and cannot operate automatically, and thus cannot exhibit a process of logical reasoning. Nineteenth-century attempts to build computers failed. Only in the 20th century, with the development of the radio industry and electronic technology, was a suitable material means provided. Electronic circuits are used in telegraph, radio, and radar equipment to steer electronic movements along specified routes for transmission, reception, and other purposes. As electronic technology advanced, circuits could be designed in which switches, bulbs, wires, and currents at high and low potential produce distinct states. “One switch on, one switch off” can represent all kinds of differences and changes. Call one state 0 and the other 1, and electronic circuits become “logic circuits,” able to carry out logical inference according to the rules of symbolic logic. This is the most suitable material expression of thinking movements.
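The passage above can be illustrated with a minimal sketch, in software rather than actual circuitry (the gate names are the standard ones; everything else is an invented example). Two "switch" states per input, combined through logic gates, already suffice to add binary digits, which is the sense in which wiring rules into a circuit makes it "reason."

```python
# A half-adder: two input "switches" (0 = off, 1 = on) -> sum and carry bits.

def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    """Return (sum bit, carry bit) for two binary digits."""
    return XOR(a, b), AND(a, b)

# The circuit's states track arithmetic exactly: 2*carry + sum == a + b.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
print("half-adder matches binary addition on all inputs")
```

The rules the circuit follows were, of course, specified in advance by its designers, which is the article's point: the "reasoning" was put there by humans.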

Electronic movements can replace part of human thinking activity, which proves that thinking is not some mysterious supra-material thing. Thinking is "the most beautiful flower in the material world" ("Dialectical Materialism"), yet it arises from the world's mud and filth. Or as Huxley put it, the human brain, with its capacity for cognition, is a towering peak of the biological world, yet also a mass of primordial mud or volcanic slag (Huxley, "Man's Place in Nature"). Lenin said, "It is logical to assert that all matter possesses a property which is essentially akin to sensation, the property of reflection" ("Materialism and Empirio-Criticism"). Thinking evolved from lower forms of the property of reflection; the brain evolved from "mud." They are identical. Therefore, under certain conditions and to a certain extent, thinking movements can be represented by electronic movements. The appearance of such "thinking machines" has given electronic components new uses, expanded human freedom with respect to nature, and shown the limitless possibility of humanity's cognition and transformation of the world, including self-cognition and self-transformation. As human social practice on Earth continues to develop, the flower of thinking in the material world will bloom ever more beautifully and vigorously.

(2) Electronic computers can only express part of human thinking

Electronic computers have significant limitations in expressing human thinking activities. They can only perform pure formal logical reasoning detached from content, which can also be called formal thinking. This is an indispensable part of human thinking but not the essential part. Electronic computers are only “thinking machines” within this limit. Their strengths and weaknesses lie here.

The process of human thinking involves the formation of concepts, judgments, and reasoning. It is also the process of investigation and research, social practice. This is dialectical thinking, alive and concrete, with unity of form and content.

The content of thinking "is nothing but the material world reflected in the human mind and translated into forms of thought" ("Afterword to the Second German Edition of Capital"). The formal expression of content requires transformation and creation, grasping the essence of things, things in their totality, and their internal connections. The concept "man" includes men and women, adults and children, Chinese and foreigners, encompassing all people from ancient times to the present. As humanity develops, the concept "man" will develop too. The concept "man" is therefore in essence flexible, evolving, and infinitely rich in content. When we say "man," we include more or less of this content. Judgments and reasoning are the same. In practice, people saw that Zhang San died and that Li Si died too; after many generations they arrived at a judgment: "All men die." No particular person can escape this objective law. Therefore, from their practice, people conclude a piece of reasoning: Zhang San, Li Si, and every other particular person will die. "Human practice, repeated a thousand million times, is fixed in human consciousness as figures of logic" ("Philosophical Notebooks"). Facts always come before concepts, judgments, reasoning, and logical rules. Thinking is "the skill of using concepts to make judgments and inferences in the human brain," that is, the summing up, in these logical forms, of the rich content obtained through social practice.

This dialectical thinking is only possible among social humans, “only for talented people, and only at higher stages of development” (“Dialectical Materialism”). Different class positions and social practices mean that the same thinking forms express different contents. As Hegel said, the same proverb spoken by young people is less rich in meaning than when spoken by mature people who have experienced many hardships. Different classes, when involved in class interests and conflicts, have no common language. That is, although words and sentences are the same in form, and concepts and judgments are the same, the content is fundamentally different.

But the formal side of thinking is likewise detached from content. The concept "one" was originally abstracted from one person, one horse, one cow, but once formed, "man," "horse," and "cow" all disappear, leaving only an abstract number. One is not zero and not two, and that is all. The concept "man," as a form of thinking, has likewise discarded a great deal, the differences between men and women, adults and children, Chinese and foreigners, leaving only the features that distinguish humans from other animals. Man is man, not horse, not cow, and that is all. As the ancient Chinese scholar Gongsun Long put it, so long as this concept is used to stand for this thing and that concept to stand for that thing, without confusion or mixing, that is enough.

Extracting the formal side of dialectical thinking is very necessary. Engels said: "In order to investigate these forms and relations in their pure state, it is necessary to separate them entirely from their content, to put the content aside as irrelevant" ("Anti-Dühring"). In this way thinking can be further symbolized. The concept of "man" is written in Chinese as the character 人, with two strokes; in English it is "person," spelled with six letters. The content is the same, the form different, and it has become symbolic. Symbolized further, from language and writing into different combinations of 0 and 1, the form grows still more distant from the content. In calculation, 1+1 always equals 2, no matter whether it concerns cows or horses! Only by setting content aside and avoiding its interference can computers "calculate" so fast, "remember" so thoroughly, and store whole dictionaries. Without temporarily sacrificing this side, such achievements would be impossible.

However, the detachment of form from content is only temporary and relative; fundamentally, form cannot be separated from content. Formal thinking therefore cannot be divorced from dialectical thinking; formal logic must be guided by dialectical logic. Purely formal thinking is hollow, impoverished, and barren, without meaning, a mere game of logic or mathematics. Such a logical system is, in Hegel's words, only a "realm of shadows," a shadow-world detached from everything concrete. In the real world, the human being is in transition, from non-human to human and onward; a human at once is and is not human. But in thinking, "we cannot imagine, express, measure, depict movement without interrupting what is continuous, without simplifying, coarsening, dismembering, strangling that which is living" ("Philosophical Notebooks"); otherwise it cannot be expressed. In formal thinking, man is man; man is not non-man; man and non-man are absolutely opposed. The contradiction between objective reality and subjective thinking determines the contradiction between the content and the form of thinking. On the side of content, thinking grasps the essence of things and reflects the objective world more profoundly, correctly, and completely; on the side of form, it departs from concrete objects and from the manifold manifestations of things. Content is connected, form fragmented; content rich, form impoverished; content flexible, form rigid. The formal expression of content can never fully express the content. Depicting things in concepts is not a simple, immediate, mirror-like dead act, but a complex, bifurcated, zigzag activity in which imagination can fly away from life ("Philosophical Notebooks"). Formal logic reflects certain objective relations between things, but in an abstracted way. Formal logic does not ask where its major premises come from, nor could it influence them if it tried. "All men die; Zhang San is a man; Zhang San dies" is logically correct; conversely, "No man dies; Zhang San is a man; Zhang San does not die" is also logically correct. Both conform to formal logic, following necessarily from their major premises. Relying on formal logic alone, therefore, cannot produce new knowledge. Deduction can expand knowledge quantitatively, but its conclusions are already contained in the major premises and cannot go beyond their scope, so it cannot by itself advance human understanding. Only through dialectical thinking in social practice, from practice to cognition, from perceptual to rational knowledge, and back again to practice, can existing understanding be corrected and developed. Formal logic serves this dialectical thinking process, and only in its service can human knowledge be continuously enriched, make discoveries, and progress.

In electronic computers, thinking becomes pure symbol operations of 0 and 1. Because of this, they can “calculate” quickly, accurately, remember extensively, and store reliably; they can also “solve problems,” “prove theorems,” “play chess,” and “translate.” These are their advantages, but also their disadvantages. 0 and 1 express contradictions, but removing the specific content of contradictions leaves only superficial differences. Therefore, there is no connection, no struggle, no transformation, no development between 0 and 1; they can only mechanically execute the rules of operation set by humans, cannot analyze specific contradictions, cannot form new concepts or judgments, and cannot synthesize new knowledge from practice. It is precisely this that brings speed and memory. The “omnipotence” of computers is also their incapacity. They can only replace human formal thinking processes with electronic mechanical movements but cannot transcend this limit. The so-called “thinking machine” only exists within this limit.

Conversely, human thinking in formal aspects indeed cannot match computers. Humans cannot calculate as fast, remember as thoroughly, and often have distractions, hallucinations, and emotional fluctuations, so they often lose at chess to machines. But the superiority of human thinking is not here. The main feature of human thinking is that it has acquired a high degree of initiative and flexibility through practice, with limitless development potential. It is precisely because of the needs of practical struggle that humans have developed this aspect while abandoning others. Just as in some respects, the human eye is not as sharp as an eagle’s eye, or the human nose is not as sensitive as a dog’s nose. But humans can summarize the laws of the objective world through practice, making their thoughts and actions conform to these laws to better understand and transform the world. This is the true intelligence of humans. Such intelligence can only come from social practice, not from certain physiological instincts. Humanity has lost the speed and memory of machinery but gained greater initiative and flexibility. Humans can think broadly and deeply, imagine, discover hidden connections between things, and create various sciences and arts. All these are gained at the cost of slower speed and poorer memory. With such intelligence, slow calculation and poor memory are not problems; humans can develop electronic computers to improve their calculation speed and enhance memory. The “intelligence” of electronic computers is ultimately just a projection of human intelligence onto machines.

Human formal thinking can be expressed through symbolic operations, but its essence is not symbolic calculation. Bourgeois logical positivists advocate that "concepts do not fundamentally exist" and that "concepts serve as symbols" (Schlick, "General Theory of Knowledge"), and even regard logical rules as "arbitrarily chosen" or "conventional" (Carnap, "Logical Syntax of Language," 1954 London edition, pp. XY, 51, etc.). When thinking becomes purely symbolic calculation, the connection between thinking and the objective world is completely severed, and the content of thinking is stripped away. Certain Soviet academicians also follow the bourgeois line, reducing the process of human thinking merely to "performing formal logical reasoning," with symbolic operations on 0 and 1 "dominating" human thinking (Kolmogorov, "Automata and Life," in the Soviet "Young Technician," 1961, Nos. 10 and 11). This distorts the essence of human thinking. If human thinking were nothing but formal thinking, then the symbolic operations in computers could "completely replace" it; but then what becomes of human self-awareness and initiative? This is the latest development of Mach's "symbol theory." The old "symbol theory" regarded human sensations as pure symbols, denying that sensations reflect the objective world; the new "symbol theory" goes further, claiming that human thinking is purely symbolic, denying the content of thinking and human dialectical thinking. Ultimately, this too denies that human thinking is a product of practice, and denies humanity's conscious initiative.

(3) Computers fundamentally cannot think

We say that electronic computers can perform formal symbolic operations of human thinking and replace some human thinking activities. Does this mean they can think formally? No. Since they are replacements, they can never be identical; tools replacing human hands are not the same as human hands. Any substitute will never be exactly the same as what it replaces. It always both manifests and does not manifest, replaces and does not replace. What we call replacing human thinking is to express human thinking. Electronic computers themselves fundamentally cannot think; they cannot think dialectically or formally. Thinking is a social product, produced by humans in social practice and serving social practice. Only social humans in practice can think.

On the surface, under the "control" of electronic computers, machines can adjust themselves "automatically" and missiles can "automatically" shoot down aircraft, carrying out guided actions and fulfilling predetermined goals. The bourgeoisie and the revisionists use this to promote the idea that computers "intrinsically have purpose" (Rosenblueth, Wiener, Bigelow, "Behavior, Purpose and Teleology," in the American "Philosophy of Science," 1943, No. 1), even claiming that their "first characteristic is having purpose." But in fact, machines have no "purpose." In nature, no movement has a conscious purpose; only humans have purpose. Mao Zedong said: "Thoughts and so on are subjective things; deeds or actions are manifestations of the subjective in the objective; both are characteristic of the special initiative of human beings. This initiative we call 'conscious initiative,' and it is what distinguishes humans from all other things" ("On Protracted War"). This is the purposefulness of human thinking and action.

Human thinking reflects the objective world, unlike a mirror that is dead and rigid. Humans always start from a certain worldview and class standpoint, based on the needs of transforming the world through practice, processing perceptual knowledge into abstract thinking, grasping the essence of things, and mastering their laws. Therefore, once thinking arises, it must form expectations, plans, and schemes to actively and proactively guide practice. This is purposefulness. The development of human history is also a process of this purposefulness. From the initial shortsightedness and blindness, humans gradually learned to estimate the more distant natural influences of production activities, even foreseeing the more distant social impacts of these actions. In class society, because of the needs of class struggle, human activities always show class character. In class struggle, revolutionary classes overthrow reactionary rule, showing clear class traits; even in production struggles, human purposes bear the marks of class interests, reflecting different class disparities. Therefore, human thinking is always social; in class society, it always carries class character.

Electronic computers merely realize human purposes. The electronic movements inside a computer cannot control anything by themselves. Only when humans assign them specific meanings can they be used to represent certain machine operations or aircraft headings, and only through the operation and transformation of electronic symbols are the results realized. Thus the computer seems to "control" things and to "make logical judgments." But in reality, everything has been arranged and planned in advance by humans! It is humans who devise the strategy within the command tent; the missile merely clinches the victory a thousand miles away. This "automation" ultimately depends on human action. Promoting the idea that electronic computers have "intrinsic purpose" aims to erase human conscious initiative and the class character of human activity.

Looking inside an electronic computer, there is just a mass of densely packed electronic components and tangled circuits. Power is supplied, electrons move, sparks flicker, states change in an instant. But these changes are nothing more than on and off, bright and dark, connected and disconnected, high and low potential. Taken "in itself," the computer is just a heap of meaningless, blind electronic movements. Not only is there no thinking or purpose; even calculation is out of the question. In these electronic movements there is no logic, no right and wrong, no 0 and 1, no calculating or not calculating. 0 and 1 are defined by humans. People call the high potential "1" and the low potential "0," or just as well the reverse, calling the low potential "1" and the high potential "0." It makes no difference; the choice is arbitrary. "Positive and negative can be treated as equivalent; it is all the same which side is taken as positive and which as negative" ("Dialectical Materialism"). What in the electronic movements themselves is 0 or 1? As the Chinese philosopher Xunzi said, "names have no intrinsic appropriateness" ("Zhengming," On the Rectification of Names): calling them 0 and 1, Yin and Yang, true and false, are all human designations, symbols. On the basis of certain rules of calculation and inference, humans design circuits to implement those rules and perform the required symbolic operations, then use indicators to display the meanings of the electronic states as particular numbers. So it may look as if the computer is "calculating." But plainly it is humans who are calculating.
Computers do not compute, control machines do not control, chess machines do not play chess, and translation machines do not understand translation; in short, “thinking machines” cannot think at all. Whatever kind of “machine” it may be, it is merely an extension of human organs. Without humans, any “machine” is just a pile of scrap metal; only human physical and mental labor can rouse these machines from their dead slumber and bring them to “life.” Since the advent of the electronic computer, machines have gone from extending the human hand to extending the human brain, and this is a new development. But it has not changed the fundamental relationship between humans and machines. For an electronic computer to stand in for human thinking, humans must first formalize and symbolize the thinking process, encode its elements as 0s and 1s with codes indicating their meanings, and lay down the sequence of operations. Only with such “software” can the components and circuits of the computer, the “hardware,” function properly, and only then does an otherwise inexplicable jumble of electronic states acquire meaning. The same computer can serve different purposes under different “programs.” The more advanced the “software” and the more skillfully humans use computers, the more fully human thinking activity can be displayed through them; this process has no end. But the computer can express only the formal part of human thinking, performing formal reasoning from premises supplied by humans. In the end it can only follow humans, taking a step wherever humans lead a step, and can never fully replace them.
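As a rough illustration of “the same computer, different programs” (a sketch of my own, not from the essay): one interpreter loop, fed two different human-written rule tables, performs two different tasks while the “hardware” stays the same.

```python
# Sketch: the "machine" blindly applies whatever rules humans wrote;
# it attaches no meaning to them. Swapping the program changes the task.

def machine(program, state):
    """Apply each (condition, action) rule whose condition holds."""
    for condition, action in program:
        if condition(state):
            state = action(state)
    return state

# Program 1: humans decide the state "means" a number, and double it.
doubling = [(lambda s: True, lambda s: s * 2)]

# Program 2: humans decide the state "means" a temperature, and clamp it.
clamping = [(lambda s: s > 100, lambda s: 100)]

print(machine(doubling, 21))   # 42
print(machine(clamping, 250))  # 100
```

The interpreter never changes; only the human-supplied rule tables, and the meanings humans read into the results, differ.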

The bourgeoisie and the revisionists argue sophistically that thinking and purpose are invisible and intangible and therefore hard to pin down; whether electronic computers can think or have purposes, they say, is beside the point, and we should look only at their “behavior” (Rosenblueth et al., “Behavior, Purpose and Teleology”) and their “pure function” (Kolmogorov, “Automata and Life”). As long as an electronic computer can do what thinking does, with the same “behavior” and “function,” it is deemed capable of thinking. This is the promotion of behaviorism and functionalism.

Marxism holds that thought and action, motive and effect, are a dialectical unity of opposites. For humans, action is the subjective working upon the objective, carried out under the control of certain ideas; action comes from thought. To know a person’s thoughts, look at their actions; to know their motives, look at the effects. On the other hand, action expresses thought but cannot express it completely. Things may turn out contrary to one’s wishes, or unexpectedly well: “flowers planted with care fail to bloom, while a willow slip stuck in carelessly grows into shade.” Generally speaking, in the practice of transforming nature or society, people’s preconceived ideas, theories, plans, and programs are rarely realized without alteration. The contradiction between motive and effect drives people to deepen their knowledge and transformation of the world, advancing without end. Marx once gave an example: a spider conducts operations resembling those of a weaver, and a bee building its cells puts many a human architect to shame. “But what from the very first distinguishes the worst architect from the best of bees is this, that the architect has built the cell in his head before he constructs it in wax” (Volume 1 of “Capital”). The architect has purpose and conscious initiative; the bee does not. That is the essential difference. Behaviorists and functionalists equate action with thought and effect (function) with motive, which in essence is a pretext for denying human thought and consciousness.

In the final analysis, electronic computers are incapable of “behavior” or “action” at all. Human action is conscious, purposeful activity; machines, lacking social practice and human thinking, cannot act under the guidance of any thought. They have only electronic movements, not action in our sense, that is, activity guided by definite ideas. The bourgeoisie and the revisionists have always liked to juggle so-called “neutral” terms, such as the Machists’ “experience,” “elements,” “energy,” and so on, to blur the line between materialism and idealism; the behaviorists and functionalists, forever talking of “behavior” and “function,” use the same trick. We stand for the unity of thought and action, of motive and effect. We oppose viewing thought apart from action, which is idealism; and we oppose viewing action apart from thought, which is mechanism and pragmatism. Such pragmatism ultimately dissolves thought into “action,” equates “action” with thinking, and so credits every living thing and every machine with the ability to think; by belittling and denying thinking from the other extreme, it too ends up in idealism.

Some believe that electronic computers cannot think because they can never have a structure as complex as the human brain’s. This view misses the heart of the matter. The difference between the machine and the human brain is not quantitative but essential. Machines cannot think because thinking is a social function arising from human social practice, which the physical functioning of a computer cannot replace. Even if electronic computers grow ever more complex in structure, even if they one day surpass the structure of the human brain, they will still exhibit only complex physical movement; they do not, and never will, have the movement of thinking.

Thinking is essentially a social movement, not a physical or chemical movement, nor the physiological movement of an isolated brain. Engels said that human consciousness comes “not from the brain, but only through the brain, from the real world” (“Anti-Dühring”). The human brain is a processing plant: the raw materials and semi-finished products come from the social practice of the masses, and through the physiological movement of brain cells they are processed into thoughts, purposes, opinions, and so on. Without a source of raw material, no processing equipment, however advanced, can turn out thought. Thinking is therefore always bound up with the physiological movement of brain cells, and with physical and chemical movements, which are inevitably connected with some real mechanical (external or molecular) motion (“Dialectics of Nature”). But thinking is not identical with these material movements, nor with the physiological movement of brain cells, nor with physical or chemical movement. Thinking is a social attribute of human beings, formed and developed in social practice, in the relations between people and between humanity and nature. “We shall one day certainly ‘reduce’ thought experimentally to molecular and chemical motions in the brain; but does that exhaust the essence of thought?” (ibid.). Of course not. With only brain cells, without social practice, without human social relations, without humanity, what is the brain? Just a lump of gray pulp. Its cells have only physiological, physical, and chemical movements, but no social movement called thinking. Thinking originates in practice and depends on the physiological movement of the brain, but it cannot be “reduced” to that movement; otherwise one falls into the vulgar materialism into which Vogt and Moleschott once fell.

The natural sciences can and must study the physiological movements of the brain; such research helps reveal the laws of thinking from various angles. But this research alone cannot exhaust the social essence of thinking, and to reduce thinking to these lower forms of movement is wrong. It negates the qualitative differences between different forms of movement and between humans and things, and it denies human conscious activity and human sociality. To say that machines can think is to revert to the old materialist claim that “stones can also think.” This is a throwback to the mechanical materialism of the rising bourgeoisie, which preached that “man is a machine,” stressing the identity of humans and things in order to oppose spiritualism. In its late period, however, the bourgeoisie has swung to the other extreme: from opposing spiritualism to “pan-spiritualism,” from denying that humans are “the spirits of all things” to holding that “all things have spirits,” turning into its own opposite.

(4) “Machines can think” is a reactionary social ideology.

Ever since electronic computers appeared in the world, the idea that “machines can think” has flooded the West. “Childbirth is painful. Besides giving birth to a living, vigorous creature, it inevitably produces some dead things, some waste that should be thrown on the rubbish heap” (“Materialism and Empirio-Criticism”). The “thinking machine,” a byproduct of the electronic computer’s birth, is just such waste.

All tools, including machines of every kind, are human “social organs.” Hand tools, whether hoe or sword, must be wielded directly by humans, and the relation of subordination is obvious. As social production develops, these social organs extend further. With machines and machine systems, tools begin to leave the human hand, and the relation between humans and tools grows blurred. With automation, humans retreat into the background while automatic machines run “by themselves” at the front line of production, and automatic assembly lines roll “by themselves.” Machines seem to have become “independent” of humans, and their essence as human social organs is still more concealed. Remote control by electronic computers further separates them from humans in space; their pre-stored programs further hide the connection in time. They appear to operate entirely without human intervention, creating the illusion of “complete independence.”

As a kind of natural object, the electronic computer is merely an assemblage of electronic components and circuits in which electrons move according to definite laws; in its natural properties there is nothing mysterious. But once it becomes a human social organ, exhibits the function of “thinking,” and acts upon humans through its “control” of machines or weapons, a halo is cast around it, and it becomes a supernatural, super-social force that seems to dominate human destiny. Everything is thereby turned upside down: it is humans who transform nature, yet this becomes the power of the machine itself; it is humanity’s victory over nature, yet it becomes the machine’s “complete independence” from humans; it is the expansion of human freedom, yet it becomes a threat to human freedom posed by the machine. The illusion conceals the essence and distorts the truth, and the bourgeoisie’s instinct to defend its class interests embellishes this illusion, systematizes it, dresses it in scholarly garb, and “proclaims it an eternal truth” (Volume 1 of “Capital”).

All exploitative classes are detached from social practice, do not understand social practice, and therefore cannot understand the essence of human thought. Slave owners and feudal lords regarded human thought as “soul.” When the bourgeoisie first emerged, they opposed this spiritualist view, emphasizing the unity of thought and existence, pulling humans back from the religious fog of “spirits of all things” into the human world. But their materialism was incomplete, mechanical, and metaphysical. In the eyes of slave owners and feudal lords, humans were talking animals; in the eyes of capitalists, humans were talking machines, auxiliary devices of machines. Even during their revolutionary periods, bourgeois materialists, due to “leaving aside human sociality and historical development to observe and understand problems,” could not grasp the dependence of cognition on social practice.

As capitalism declines and the bourgeoisie grows more detached from the people, it relies ever more on material things to shore up its rule, trying to stave off its impending doom. It deifies the opposition between mental and manual labor; it deifies thought. As a result, not only do objects such as machines and weapons become “idols” and “gods of war,” but the electronic computer, as the incarnation of thought, becomes the supreme deity. “All religion is nothing but the fantastic reflection in men’s minds of those external forces which control their daily life, a reflection in which the terrestrial forces assume the form of supernatural forces” (“Anti-Dühring”). As the bourgeoisie declines, this fetishism develops at the same pace.

The 19th-century materialist Feuerbach once said: “God worship is merely a form of self-worship” (“The Essence of Religion,” see People’s Publishing House, 1953, p. 4). This is correct, but should be supplemented: not only is it self-worship, but also worship of one’s own class. The bourgeoisie extols electronic computers to such an extent that they see the computer as their own embodiment, or see themselves as representatives of the computer, to deceive and fool themselves, to lull the fighting spirit of the people, and to divert the class struggle.

“Too clever by half in their scheming, they forfeit their very lives.” The bourgeoisie hopes that machines will “save the world,” which in fact only shows its own hopelessness. The various fantasies of “technological salvation” born of the illusion that “machines can think” fundamentally reflect its blindness before these new technical achievements, and show that capitalist relations of production have turned “from forms of development of the productive forces into their fetters” (“A Contribution to the Critique of Political Economy”). The deification of the electronic computer is likewise a distorted expression of the new productive forces straining against these outworn relations and their superstructure. Confronted with the epistemological questions raised by the electronic computer, all bourgeois philosophy falls into utter confusion. All bourgeois thinkers, including those academicians of the Soviet Union, fundamentally fail to understand the essence of human thought; they are extremely fearful of, and utterly enslaved by, the new productive forces. This shows that the capitalist superstructure can no longer meet the demands of the economic base. Their fear of the new productive forces is in reality fear of the proletariat, the representative of these productive forces. This confirms once again the truth that socialism will inevitably triumph and capitalism will inevitably perish!

10 Likes

Learned :hushed:

The so-called thinking of AI is actually built on human thinking. Without human thinking there would be no algorithms, and without algorithms AI could not achieve any of its “human-like” behaviors. In simple terms, humans first tell the machine their logic, and then the machine does as it is told.

But AI will never reach the height of human thinking; after all, it is just a pile of code that follows formal logic.
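To illustrate the point in this post, here is a deliberately trivial sketch (the names and replies are my own invention): every “decision” the program makes was written into it by a human beforehand, and anything outside the human-written rules draws a blank.

```python
# A minimal rule-following "chatbot": humans tell the machine their logic
# first, and the machine does exactly as told, nothing more.

rules = {
    "hello": "Hello! How can I help?",
    "bye": "Goodbye!",
}

def reply(message):
    # The machine only looks up what a human told it to say.
    return rules.get(message.lower(), "I have no rule for that.")

print(reply("Hello"))                          # Hello! How can I help?
print(reply("What is the meaning of life?"))   # I have no rule for that.
```

Modern learned models are far more elaborate than a lookup table, but the post’s claim is about the relationship: the logic, data, and objectives all come from humans first.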

It’s really tough. Can they only be identified once someone shoulders a rocket launcher to knock out a Merkava? Besides, isn’t the most “suspicious behavior” first and foremost that of the Israeli troops invading Gaza? :pray:

Now that the much-hyped DeepSeek from the Chinese revisionists has been released, let’s look back at this thread. In fact, AI in capitalist society is not as miraculous as it seems, and the Chinese revisionists’ AI is more mediocre still. Recently the new DeepSeek was accused of plagiarizing ChatGPT; if you ask it, it may even answer that it was developed by OpenAI. Many obscure domestic “AI” products really have ChatGPT as their kernel. AI under capitalism today is either garbage or just one copy of another.

2 Likes

Why can the various forms of thinking and logical relationships be transformed into combinations of, and operations on, the two symbols 0 and 1?

Because 0 and 1 are simply the abstraction of all the concrete forms of formal logic, corresponding to the two sides of a contradiction. Ultimately, the law of the unity of opposites is the fundamental law of the world, and in form every contradiction can be reduced to some mode of struggle between 0 and 1 (“combination” and “operation” being the unity and opposition of 0 and 1). For example, from a light switch one abstracts “on” and “off,” then symbolizes “on” with 1 and “off” with 0, and so on.
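A small illustration of this abstraction (my own sketch, not from the thread): once “on”/“off” judgments are symbolized as 1 and 0, compound judgments become operations on those symbols.

```python
# Concrete opposites ("on"/"off", "yes"/"no") abstracted into 1 and 0;
# formal-logical judgments then become operations on the symbols.

ON, OFF = 1, 0

def NOT(x):
    return 1 - x

def AND(x, y):
    return x & y

def OR(x, y):
    return x | y

# "The light is on AND the door is not open"
light, door_open = ON, OFF
print(AND(light, NOT(door_open)))  # 1, i.e. the compound judgment holds
```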

2 Likes

It is basically the abstraction of the two sides of various concrete contradictions into 0 and 1. For example, yes or no, same or different, have or have not, right or wrong can all be expressed in the form of 0 and 1. Most of the time, our own judgments and inferences likewise employ concepts of this kind.