The Chinese room experiment: computers with minds?


March 3, 2024

The Chinese room is a thought experiment posed by the American philosopher John Searle to demonstrate that the ability to manipulate a set of symbols in an orderly way does not necessarily imply any linguistic understanding of those symbols. That is, understanding does not arise from syntax alone, which calls into question the computational paradigm developed by the cognitive sciences to explain how the human mind works.

In this article we will see exactly what this thought experiment consists of and what kinds of philosophical debates it has generated.

  • Related article: "How are Psychology and Philosophy alike?"

The Turing machine and the computational paradigm

The development of artificial intelligence is one of the great attempts of the 20th century to understand, and even replicate, the human mind through the use of computer programs. In this context, one of the most popular models has been the Turing machine.


Alan Turing (1912-1954) wanted to show that a programmed machine could hold conversations like a human being. To this end, he proposed a hypothetical situation based on imitation: if we program a machine to imitate the linguistic capacities of human speakers, place it before a panel of judges, and manage to convince 30% of those judges that they are talking to a real person, this would be sufficient evidence that a machine can be programmed to replicate the mental states of human beings; conversely, it would also serve as an explanatory model of how human mental states work.

Within the computational paradigm, part of the cognitivist current suggests that the most efficient way to acquire knowledge about the world is through the increasingly refined reproduction of information-processing rules, so that, independently of each person's subjectivity or history, we could function and respond in society. On this view, the mind is an exact copy of reality: the place of knowledge par excellence and the tool for representing the outside world.


After the Turing machine, several computer systems were programmed to try to pass the test. One of the first was ELIZA, designed by Joseph Weizenbaum, which responded to users by matching their input against patterns previously registered in a database, leading some interlocutors to believe they were talking to a person.
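As a rough illustration (not Weizenbaum's actual script, and with invented rules), a pattern-matching responder in the spirit of ELIZA can be sketched in a few lines of Python:

```python
import re

# Hypothetical, simplified rules in the spirit of ELIZA's DOCTOR script.
# Each pattern maps to a canned response template; the rules below are
# illustrative inventions, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Reply by purely syntactic pattern matching.

    The program never 'understands' the input; it only rearranges
    matched fragments into templates.
    """
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(respond("I am worried about my exams"))
```

Nothing in this sketch represents meaning; it merely reflects the user's words back, which is precisely why such programs can seem convincing without understanding anything.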

Among the more recent inventions that resemble the Turing test we find, for example, CAPTCHAs used to detect spam, or Siri in the iOS operating system. But just as there have been those who try to prove Turing right, there have also been those who question him.

  • You may be interested: "The Molyneux Problem: a curious mental experiment"

The Chinese room: does the mind work like a computer?

Based on the experiments that sought to pass the Turing test, John Searle distinguishes between Weak Artificial Intelligence (which simulates understanding but has no intentional states; that is, it describes the mind but does not equal it) and Strong Artificial Intelligence (when the machine has mental states like those of human beings, for example, if it can understand stories as a person does).


For Searle, creating Strong Artificial Intelligence is impossible, which is what he set out to prove by means of a thought experiment known as the Chinese room. The experiment poses the following hypothetical situation: a native English speaker who does not know Chinese is locked in a room and must answer questions about a story that has been told in Chinese.

How does he respond? Through a rule book, written in English, that serves to syntactically arrange the Chinese symbols without explaining their meaning, only how they should be used. Through this exercise, the person inside the room answers the questions properly, even though he has not understood their content.
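The purely syntactic procedure described above can be caricatured in code: a lookup table that maps question symbols to answer symbols, with no representation of meaning stored anywhere. (The rule entries below are invented placeholders for illustration, not part of Searle's original example.)

```python
# A caricature of Searle's rule book: map input symbol strings to
# output symbol strings purely by their shape. The entries are
# invented placeholders; no meaning is stored anywhere.
RULE_BOOK = {
    "谁偷了汉堡？": "小男孩偷了汉堡。",
    "他在哪里吃的？": "他在餐厅里吃的。",
}

def room(question: str) -> str:
    # The occupant matches the incoming squiggles against the book
    # and copies out the paired squiggles; syntax only, no semantics.
    return RULE_BOOK.get(question, "请再说一遍。")
```

To anyone reading only the outputs, the answers look competent, yet the function, like the room's occupant, understands nothing of what it emits.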

Now, suppose there is an external observer. What does that observer see? That the person inside the room behaves exactly like someone who does understand Chinese.

For Searle, this shows that a computer program can mimic a human mind, but this does not mean that the program is the same as a mind, because it has no semantic capacity or intentionality.

Impact on the understanding of the human mind

Applied to humans, the foregoing means that the process by which we develop the ability to understand a language goes beyond possessing a set of symbols; other elements, which computer programs cannot have, are necessary.

Not only that: from this experiment, research has expanded on how meaning is constructed, and where that meaning resides. The proposals are very diverse, ranging from cognitivist perspectives, which hold that meaning is in each person's head, derived from a set of mental states or given innately, to more constructionist perspectives, which ask how historically situated social systems and practices are constructed and give rise to social meaning (a term has meaning not because it is in people's heads, but because it enters into a set of practical rules of language use).

Criticisms of the Chinese room thought experiment

Some researchers who disagree with Searle think the experiment is invalid because, even if the person inside the room does not understand Chinese, it may be that, taken together with the elements that surround him (the room itself, the furniture, the rules manual), the system as a whole understands Chinese.

Given this, Searle responds with a new hypothetical situation: even if we remove the elements surrounding the person inside the room and ask him to memorize the rule manuals for manipulating the Chinese symbols, this person still would not understand Chinese, and neither, therefore, would a computational processor.

The reply to this same criticism has been that the Chinese room is a technically impossible experiment. The answer to that, in turn, has been that what is technically impossible is not thereby logically impossible.

Another of the most popular criticisms has been the one put forward by Dennett and Hofstadter, which applies not only to Searle's experiment but to the whole set of thought experiments developed in recent centuries: their reliability is doubtful because they lack rigorous empirical grounding, being speculative and close to common sense, so they are, first of all, "intuition pumps".

Bibliographic references:

  • González, R. (2012). The Chinese Room: a thought experiment with Cartesian bias? Chilean Journal of Neuropsychology, 7(1): 1-6.
  • Sandoval, J. (2004). Representation, discursivity and situated action: a critical introduction to the social psychology of knowledge. Valparaíso, Chile: University of Valparaíso.
  • González, R. (n.d.). "Intuition pumps", mind, materialism and dualism: verification, refutation or epoché? Repository of the University of Chile. [Online]. Accessed April 20, 2018. Available at //repositorio.uchile.cl/bitstream/handle/2250/143628/Bombas%20de%20intuiciones.pdf?sequence=1.
