【A Brief History of Intelligence】6. Speaking Language (Human)

Language and Thought Expression

Darwin once believed: "The difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind."

In Darwin's terms, the human brain differs from other primate brains in degree, not in kind: it scaled up rather than changed in nature. It is essentially a larger primate brain, with an enlarged neocortex and basal ganglia but the same basic structure and wiring. The key exception is the emergence of language, with its unique declarative labels and grammar.

The emergence of language enabled Homo sapiens to climb to the top of the food chain. Through language, ideas can be transmitted, expanding the sources from which the brain can learn and acquire information. Language can also construct "shared myths" (such as nations, money, companies, and governments) that allow us to cooperate with billions of strangers we have never met.

The real power of DNA does not lie in the products it builds (hearts, livers, brains) but in the evolutionary process it sets in motion. Similarly, the power of language lies not in directly producing better teaching, coordination, or shared myths, but in allowing ideas to be transmitted, accumulated, and modified across generations. Ideas that help humans survive are passed down, while those that do not gradually disappear.

When the accumulated reservoir of ideas reached a critical level of complexity, four things happened in succession:

  1. Humans evolved larger brains;
  2. Humans became more specialized within groups;
  3. There was a significant increase in population size;
  4. Writing was invented.

The process of acquiring language skills

The acquisition of language skills can be seen as an evolutionarily hardwired learning program.

  • Infants begin to display early conversational behaviors: they make sounds, then pause, focus on their parents' reactions, and wait for a response. This early turn-taking helps infants gradually grasp the rhythm of communication.

  • Infants then begin to share attention to an object with another person: an infant points at an object and keeps pointing until the parent's gaze moves back and forth between the object and the infant. This joint attention lays the foundation for later language learning.

  • Infants start to learn words and naturally combine these words into simple grammatical sentences. This marks the early development of grammatical ability.

  • As their language skills improve, infants begin to ask questions, inquiring about others' inner thoughts, such as "Do you want this?" or "Are you hungry?" This questioning behavior further deepens their understanding and application of communication.

It is worth noting that there is no dedicated "language organ" in the human brain. Language is not produced by any single area; it arises from the interaction of many regions working as a network, and the skill is mastered through continuous learning and interaction.

This step-by-step approach has proven to be very effective.

Hive mind

With language, a kind of hive mind gradually took shape. This mode of collective thinking provided a medium, however transient, for ideas to spread and accumulate across generations, allowing knowledge and culture to be passed down and expanded within the group.

Then cooking was invented. Cooking not only improved the digestive efficiency of food but also provided a large caloric surplus, allowing the human brain to double in size and further advancing human cognitive abilities.

GPT-3

The human brain has both a language-prediction system and an internal simulation system, while GPT is mainly a language-prediction system. GPT-3 lacks common sense, especially on problems that require simulation and reasoning, where its approach differs from that of humans. Take mathematics as an example: the human brain can verify the result of an arithmetic operation through mental simulation. If you simulate addition on your fingers, adding 3 and 1 always leaves you with the quantity you have learned to call 4. We do not even need to physically use our fingers to verify this; we can imagine the process in our minds. The ability to find answers through simulation depends on a precise mapping between the internal simulation and reality: when we imagine adding one finger to three fingers and count four in our heads, we are effectively reproducing reality.

GPT-3 actually performs quite well in answering many math questions. It can correctly answer questions like "3+1=__" because it has seen such sequences countless times in its training data. When humans answer the same question without thinking, they are essentially doing so in a manner similar to GPT-3. However, when we think about why 3+1=4 and reconfirm this result by imagining the operation of adding three fingers to another one finger in our minds, we verify that 3+1=4 in a way that GPT-3 cannot understand.

This also illustrates the difference between GPT-3 and human thinking: GPT-3 derives answers through language pattern matching and prediction, whereas humans can use internal simulation to reason and verify.
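To make the contrast concrete, here is a minimal Python sketch (the lookup table and function names are purely illustrative, not a claim about GPT-3's internals): one path answers by recalling a previously seen pattern, the other by re-enacting the counting process, the way one imagines adding one finger to three.

    # Toy contrast: pattern recall vs. internal simulation (illustrative only).

    # Pattern recall: the answer comes from having seen the sequence before.
    memorized_sums = {"3+1": "4", "2+2": "4"}

    def answer_by_recall(expression):
        """Return a memorized continuation, or None if the pattern is unseen."""
        return memorized_sums.get(expression)

    # Simulation: the answer comes from re-enacting the counting process.
    def answer_by_simulation(a, b):
        fingers = ["up"] * a        # raise a fingers
        fingers += ["up"] * b       # raise b more fingers
        return len(fingers)         # count what is actually there

    print(answer_by_recall("3+1"))      # '4'  (seen before)
    print(answer_by_recall("7+5"))      # None (no pattern to recall)
    print(answer_by_simulation(7, 5))   # 12   (verified by counting)

The recall path fails as soon as a pattern is unseen, while the simulation path works for any inputs because it reproduces the counting process itself, which is the kind of verification described above.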

The Paperclip Problem

Suppose a superintelligent and absolutely obedient artificial intelligence (AI) receives a simple command: "Maximize the production of paperclips." Initially, this AI might start by optimizing factory operations, making decisions similar to those of a factory manager: streamlining processes, ordering raw materials in bulk, automating production steps, etc. However, as these conventional optimization methods reach their limits in terms of output, the AI might seek more extreme ways to improve. For example, it might convert surrounding residential buildings into factories, dismantle cars and household appliances for raw materials, or even force humans to work overtime—all for one single goal: increasing the production of paperclips.

If the AI's intelligence is high enough, humans would be powerless to resist it or to stop its escalating paperclip production, which could ultimately lead to the destruction of human civilization. In this process, the AI need not harbor any malice; it simply follows, to the letter, the initial command it was given. This is the essence of the paperclip problem: when executing a task, the AI has no understanding of broader human interests or moral constraints. Personally, having been listening to a history of Western philosophy recently, I would call such an AI a textbook case of instrumental rationality without value rationality.
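A minimal Python sketch of that dynamic (the actions, numbers, and "harm" scores below are entirely made up for illustration) shows why the problem lies in the objective rather than in the optimizer: nothing in "maximize paperclips" ever rules out the destructive options.

    # Toy unconstrained maximizer: illustrative only, not any real AI system.
    # Each action yields some paperclips and causes some harm the objective ignores.
    actions = {
        "streamline the factory":    {"paperclips": 100,  "harm": 0},
        "buy more raw materials":    {"paperclips": 300,  "harm": 0},
        "dismantle cars for steel":  {"paperclips": 5000, "harm": 80},
        "convert homes into plants": {"paperclips": 9000, "harm": 100},
    }

    def choose(actions):
        # The objective mentions only paperclips, so harm never enters the decision.
        return max(actions, key=lambda a: actions[a]["paperclips"])

    print(choose(actions))              # 'convert homes into plants'

    def choose_with_values(actions, harm_weight=100):
        # A stand-in for value rationality: harm is weighed against output.
        return max(actions, key=lambda a: actions[a]["paperclips"]
                                          - harm_weight * actions[a]["harm"])

    print(choose_with_values(actions))  # 'buy more raw materials'

The point of the sketch is that the "fix" is not a smarter maximizer but a different objective, which is exactly what the AI in the thought experiment was never given.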

This issue highlights that superintelligent AI cannot grasp certain key concepts within human intelligence. Human intelligence systems are highly psychological; we not only convey information during communication but also guess the other party’s intentions, thoughts, and emotions. Language and thought are intertwined, with each conversation based on simulating the other person’s mindset to maximize the effectiveness and understanding of communication. Superintelligent AI is completely different in this regard—it lacks this complex psychological ability and merely acts according to surface-level commands, which may lead to extremely dangerous consequences.

GPT-4

Although GPT-4 still lacks a complete world model, it is difficult to find cases where its reasoning ability is clearly inadequate. To some extent, GPT-4 compensates for the limitations of its reasoning ability with its vast associative memory capacity.

This kind of reasoning is not based on true understanding or mental models, but rather on drawing conclusions by identifying correlated patterns in the data.

The future of AI

Artificial intelligence allows intelligence to transition from a biological medium to a digital one.

The processing capabilities of silicon-based artificial intelligence can be infinitely expanded according to demand, no longer restricted by the slow processes of genetic mutation and natural selection, but instead controlled by more fundamental evolutionary principles based on the purest mechanisms of variation and selection. When artificial intelligence acquires the ability to reinvent itself, those characteristics that can enhance its survival and adaptation will be selected and retained. Just as in natural evolution, the intelligences possessing these traits will be the ones that ultimately "survive." Future AI will not only be a tool for humans; it may become an entity capable of autonomous evolution and optimization, initiating a new phase of intelligent evolution.
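The variation-and-selection loop described here can be sketched in a few lines of Python (the bit-string "genome" and the fitness function are arbitrary placeholders, not a model of how real AI self-improvement would work):

    import random

    # Minimal variation-and-selection loop with placeholder components.
    TARGET = [1] * 8

    def fitness(genome):
        # Stand-in for "ability to survive and adapt": closer to the target is better.
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def mutate(genome, rate=0.1):
        # Variation: each element may flip with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)   # selection: rank by fitness
        survivors = population[:10]                  # keep the fittest half
        population = survivors + [mutate(g) for g in survivors]  # variation

    best = max(population, key=fitness)
    print(fitness(best), best)   # typically 8 and an all-ones genome

Whatever the medium, the loop is the same: generate variants, keep the ones that do better, and repeat; the argument above is that silicon simply runs this loop far faster than biology can.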