Chinese room argument
The Chinese Room Argument aims to refute a certain conception of the role of computation in human cognition. In order to understand the argument, it is necessary to see the distinction between Strong and Weak versions of Artificial Intelligence. According to Strong Artificial Intelligence, any system that implements the right computer program with the right inputs and outputs thereby has cognition in exactly the same literal sense that human beings have understanding, thought, memory, etc. The implemented computer program is sufficient for, because constitutive of, human cognition. Weak or Cautious Artificial Intelligence claims only that the computer is a useful tool in studying human cognition, as it is a useful tool in studying many scientific domains. Computer programs which simulate cognition will help us to understand cognition in the same way that computer programs which simulate biological processes or economic processes will help us understand those processes. The contrast is that according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.
Strong AI is answered by a simple thought experiment. If computation were sufficient for cognition, then any agent lacking a cognitive capacity could acquire that capacity simply by implementing the appropriate computer program for manifesting that capacity. Imagine a native speaker of English, me for example, who understands no Chinese. Imagine that I am locked in a room with boxes of Chinese symbols (the database) together with a book of instructions in English for manipulating the symbols (the program). Imagine that people outside the room send in small batches of Chinese symbols (questions) and these form the input. All I know is that I am receiving sets of symbols which to me are meaningless. Imagine that I follow the program, which instructs me how to manipulate the symbols. Imagine that the programmers who designed the program are so good at writing it, and I get so good at manipulating the Chinese symbols, that I am able to give correct answers to the questions (the output). The program makes it possible for me, in the room, to pass the Turing Test for understanding Chinese, but all the same I do not understand a single word of Chinese. The point of the argument is that if I do not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any other digital computer solely on that basis, because the computer, qua computer, has nothing that I do not have.
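The purely formal character of the room's operations can be illustrated with a small sketch. The toy program below is only an illustration under invented assumptions (the symbol strings and the rule table are made up, and a program that could actually pass the Turing Test would be enormously more complex): it relates input shapes to output shapes, and nothing in it requires, or supplies, any grasp of what the symbols mean.

```python
# Toy "Chinese Room": the rule book pairs input symbol strings with output
# symbol strings. The rules operate purely on the shapes of the symbols;
# neither the lookup nor whoever executes it needs to understand Chinese.
# (The entries below are invented examples for illustration only.)

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",
    "你喜欢喝茶吗？": "是的，我很喜欢喝茶。",
}

def answer(question: str) -> str:
    """Return whatever output symbols the rule book pairs with the input symbols."""
    # The fallback string is also applied purely by rule, not by comprehension.
    return RULE_BOOK.get(question, "对不起，我不明白。")

if __name__ == "__main__":
    # Produces a fluent-looking answer without any understanding of the question.
    print(answer("你喜欢喝茶吗？"))
```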
The argument proceeds by a thought experiment, but the thought experiment is underlain by a deductive proof, and it illustrates a crucial premise in that proof. The proof contains three premises and a conclusion, listed below, followed by a schematic rendering of its logical form.
- Premise 1: Implemented programs are syntactical processes.
The claim that implemented programs are syntactical processes is not like the claim that men are mortal. The essence of the program is identified by its syntactical features. They are not just incidental features. It is like saying triangles are three-sided plane figures. There is nothing to the program qua program but its syntactical properties. Triangles may be pink or blue, but that has nothing to do with triangularity; analogously, programs may be in electronic circuits or Chinese rooms, but that has nothing to do with the nature of the program.
- Premise 2: Minds have semantic contents.
- Premise 3: Syntax by itself is neither sufficient for nor constitutive of semantics.
- Conclusion: Therefore, the implemented programs are not by themselves constitutive of, nor sufficient for, minds. In short, Strong Artificial Intelligence is false.
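How the conclusion follows from the three premises can be made explicit by giving the argument's bare logical form. The following Lean 4 sketch is only one possible reconstruction: the relation `suff`, read "is by itself sufficient for", and the way each premise is rendered are illustrative glosses rather than part of the original argument.

```lean
-- Schematic form of the argument. `suff x y` is read "x is by itself
-- sufficient for y"; the premise renderings are illustrative glosses.
theorem chinese_room_form
    {Feature : Type}
    (suff : Feature → Feature → Prop)
    (prog syn sem mind : Feature)
    -- Premise 1: the program, qua program, is nothing but syntax, so whatever
    -- the implemented program suffices for, syntax alone suffices for.
    (p1 : ∀ y, suff prog y → suff syn y)
    -- Premise 2: minds have semantic contents, so whatever suffices for a mind
    -- suffices for semantics.
    (p2 : ∀ x, suff x mind → suff x sem)
    -- Premise 3: syntax by itself is not sufficient for semantics.
    (p3 : ¬ suff syn sem) :
    -- Conclusion: the implemented program by itself is not sufficient for a mind.
    ¬ suff prog mind :=
  fun h => p3 (p1 sem (p2 prog h))
```

On this rendering the conclusion follows immediately from the premises; the substantive work of the thought experiment is to support Premise 3, that syntax by itself is not sufficient for semantics.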
The Chinese Room Argument thus rests on two simple but basic principles, each of which can be stated in four words.
- First: Syntax is not semantics.
- Second: Simulation is not duplication.
There have been a rather large number of discussions and objections to the Chinese Room, but none have shaken its fundamental insight as described above. Perhaps the most common attack is what I baptized as the Systems Reply. The claim of the Systems Reply is that though the man in the room does not understand Chinese, he is not the whole of the system; he is simply a cog in the system, like a single neuron in a human brain (this example of a single neuron was used by Herbert Simon in an attack he made on the Chinese Room Argument in a public lecture at the University of California, Berkeley). He is the central processing unit of a computational system, but Strong AI does not claim that the CPU by itself would be able to understand. It is the whole system that understands. The Systems Reply can be answered as follows. Suppose one asks: why is it that the man does not understand, even though he is running the program that Strong AI grants is sufficient for understanding Chinese? The answer is that the man has no way to get from the syntax to the semantics. But in exactly the same way, the whole system, the whole room in which the man is located, has no way to pass from the syntax of the implemented program to the actual semantics (or intentional content or meaning) of the Chinese symbols. The man has no way to understand the meanings of the Chinese symbols from the operations of the system, but neither does the whole system. In the original presentation of the Chinese Room Argument, I illustrated this by imagining that I get rid of the room and work outdoors by memorizing the database, the program, etc., and doing all the computations in my head. The principle that the syntax is not sufficient for the semantics applies both to the man and to the whole system.
The Chinese Room Argument is sometimes misinterpreted. Three of the most common misunderstandings are the following. First, it is sometimes said that the argument is supposed to show that computers can’t think. That is not the point of the argument at all. If a computer is defined as anything that can carry out computations, then every normal human being is a computer, and consequently, a rather large number of computers can think, namely every normal human. The point is not that computers cannot think. The point is rather that computation as standardly defined in terms of the manipulation of formal symbols is not by itself constitutive of, nor sufficient for, thinking.
A second misunderstanding is that the Chinese Room Argument is supposed to show that machines cannot think. Once again, this is a misunderstanding. The brain is a machine. If a machine is defined as a physical system capable of performing certain functions, then there is no question that the brain is a machine. And since brains can think, it follows immediately that some machines can think.
A third misunderstanding is that the Chinese Room Argument is supposed to show that it is impossible to build a thinking machine. But this is not claimed by the Chinese Room Argument. On the contrary, we know that thinking is caused by neurobiological processes in the brain, and since the brain is a machine, there is no obstacle in principle to building a machine capable of thinking. Furthermore, it may be possible to build a thinking machine out of substances unlike human neurons. At any rate, we have no theoretical argument against that possibility. What the Chinese Room Argument shows is that this project cannot succeed solely by building a machine that implements a certain sort of computer program. One can no more create consciousness and thought by running a computer simulation of consciousness and thought than one can build a flying machine simply by building a computer that can simulate flight. Computer simulations of thought are no more actually thinking than computer simulations of flight are actually flying or computer simulations of rainstorms are actually raining. The brain is above all a causal mechanism, and anything that thinks must be able to duplicate, and not merely simulate, the causal powers of that causal mechanism. The mere manipulation of formal symbols is not sufficient for this.
The Chinese Room Argument had an unusual beginning and an even more unusual history. In the late 1970s, Cognitive Science was in its infancy and early efforts were often funded by the Sloan Foundation. Lecturers were invited to universities other than their own to lecture on foundational issues in cognitive science, and I went from Berkeley to give such lectures at Yale. In the terminology of the time we were called Sloan Rangers. I was invited to lecture at the Yale Artificial Intelligence Lab, and as I knew nothing about Artificial Intelligence, I brought a book by the leaders of the Yale group, in which they purported to explain story understanding. The idea was that they could program a computer that could answer questions about a story even though the answers to the questions were not made explicit in the story. Did they think the story understanding program was sufficient for genuine understanding? It seemed to me obvious that it was in no way sufficient for story understanding, because using the programs that they designed, I could easily imagine myself answering questions about stories in Chinese without understanding any Chinese. Their story understanding program manipulated symbols according to rules but it had no understanding. It had a syntax but not a semantics. These ideas came to me at 30,000 feet between cocktails and dinner on United Airlines on my flight East to lecture in New Haven. I knew nothing of Artificial Intelligence and because my argument seemed so obvious to me, I assumed that it was probably a familiar argument and that the Yale group must have an answer to it. But when I got to Yale I was amazed to discover that they were surprised by the argument. Everybody agreed that the argument was wrong but they did not seem to agree on exactly why it was wrong. And indeed most of the subsequent objections to the Chinese Room I heard, in early forms, in those days I spent lecturing at Yale. The article was subsequently published in Behavioral and Brain Sciences for 1980, and provoked twenty-seven simultaneously published responses, almost all of which were hostile to the argument and some were downright rude. I have since published the argument in other places, including my 1984 Reith Lectures book Minds, Brains and Science, The Scientific American and The New York Review of Books.
I never really had any doubts about the argument, as it seems obvious that the syntax of the implemented program is not the same as the semantics of actual language understanding. And only someone with a commitment to an ideology that says that the brain must be a digital computer ("What else could it be?") could still be convinced of Strong AI in the face of this argument. As I was invited by various universities to lecture on this issue I discovered that the answers to it tended to fall into certain patterns, which I named respectively the Systems Reply, the Robot Reply (if we put the computer inside a robot it would acquire understanding because of the robot’s causal interaction with the world), the Wait ’Til Next Year Reply (better technology in the future will enable digital computers to understand), the Brain Simulator Reply (if we did a computer simulation of every neuron in the Chinese brain, then the computer would have to understand Chinese), etc. I had no trouble answering these and other objections. I assumed that the orthodox Strong AI people would fasten on to the Robot Reply, because it seems to exemplify the behaviorism that was implicit in the whole project of assuming that the Turing Test was a conclusive proof of human cognition. But to my surprise the mainstream adopted the Systems Reply, which is, I think, obviously inadequate for reasons I state in this essay. The Chinese Room Argument has had a remarkable history since its original publication. The original article was published in at least twenty-four collections and translated into seven languages. Subsequent statements of the relevant argument in Minds, Brains and Science were also reprinted in several collections, and the whole book was translated into twelve languages. I have lost count of the publications, reprintings and translations of other statements. Two decades after the original publication of the article, a book appeared, edited by John Preston and Mark Bishop, called Views into the Chinese Room. A web site listed below currently cites 137 discussions (I assume mostly attacks) on the argument.
- Preston, John and Mark Bishop (eds.). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford/New York: Oxford University Press, 2002.
- Searle, John R. Minds, Brains, and Programs. The Behavioral and Brain Sciences, Vol. 3, 1980.
- Searle, John R. Minds, Brains and Science. London: BBC Publications, 1984; Penguin Books, 1989; Cambridge, MA: Harvard University Press, 1984.
- Searle, John R. Is the Brain's Mind a Computer Program? Scientific American, January 1990.
- Searle, John R. The Myth of the Computer. The New York Review of Books, April 29, 1982.
- Dietrich, Eric (ed.). Thinking Computers and Virtual Persons. San Diego: Academic Press, 1994.
See also: Artificial Intelligence, Brain