While humans and machines both engage in mere generation, a popular school of philosophical thought holds that computers can never be capable of more than mere generation. Some use Gödelian arguments, while others cite obscure aspects of quantum theory (and some, such as the physicist Roger Penrose, combine both in a ruthless, if not entirely convincing, pincer attack). Regardless of the theoretical specifics, many simply assert that when it comes to replicating the qualities of the human brain, computers are just not made of the right stuff.
Perhaps the most famous proponent of this view is the philosopher John Searle, who invented a fiendish thought experiment to marshal our intuitions against the enterprise of full-blown (or strong) AI. His experiment, evocatively named the Chinese Room, asks us to imagine a scenario that is best described as Artificial Intelligence meets the Manchurian Candidate. A man wakes up in the locked room of the title, surrounded by books of rules for manipulating inscrutable symbols. It’s tempting to think of this poor wretch as a P.O.W. in the Korean War, but any non-speaker of Chinese will do, since Searle’s symbols are the characters of the Chinese writing system, which Searle tells us appear as unintelligible “squiggles” to the bemused subject. The man’s only interaction with the outside world is via a slot, through which slips of paper bearing Chinese characters (and, one hopes, the occasional meal) are passed. The man knows that his continued survival requires him to precisely apply the rules to the inputs he receives, to manipulate the symbols accordingly and thereby generate a new sequence of symbols, on a new slip of paper to be passed back through the slot to the outside world. Unbeknownst to the man, the incoming slips contain questions written in Chinese, and the rules in the books encode a manually-executable algorithm for answering these questions. The rules are intricate and cleverly constructed, so that the slips of paper that are passed back through the slot contain answers that seem convincing to the person waiting outside.
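The room's manually executable algorithm can be caricatured in a few lines of code: a lookup table stands in for the rule books, and an operator function applies the rules with no access to what the symbols mean. The symbols and rules below are hypothetical placeholders, a sketch of the idea rather than anything Searle specifies.

```python
# A minimal sketch of the Chinese room as pure symbol manipulation.
# The rule book is modeled as a lookup table from input symbol
# sequences to output symbol sequences; all entries are invented
# placeholders for illustration.

RULE_BOOK = {
    "你好吗": "我很好",    # hypothetical rule: a greeting maps to a reply
    "你是谁": "我是学者",  # hypothetical rule: an identity question
}

def operate_room(slip: str) -> str:
    """Apply the memorized rules to an incoming slip of symbols.

    The operator neither knows nor needs to know what any symbol
    means; the mapping alone produces a convincing-looking reply.
    """
    return RULE_BOOK.get(slip, "请再说一遍")  # fallback: 'please say that again'

print(operate_room("你好吗"))  # a fluent reply, with zero understanding
```

The point of the caricature is that nothing in the lookup-and-reply loop requires, or even permits, comprehension: the same function would work identically on symbols from a language no one has ever spoken.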
The Chinese room provides an oracular service of sorts, and to the people who use it, the room may well conceal a learned Chinese scholar. Whoever or whatever is hidden in the room, it looks to these users as though it truly understands Chinese. However, we are privy to the room’s secrets, and know that the wretched man who is trapped inside has not one word of Chinese. Not only does he not understand the requests that come in via the slot, neither does he understand the well-formed Chinese replies that he sends back out again. The man sings for his supper, but he does not understand his song or its words. Now, Searle’s room has a curious resemblance to Schrödinger’s infamous box. Each is sealed from prying eyes, which creates an air of magical realism. In the box we find a cat who may be dead or alive (the cat has a 50/50 chance of being killed by a toxic gas); in the room we find a man who may or may not understand Chinese. Quantum theory delivers the counter-intuitive verdict that the unobserved cat is simultaneously dead and alive – a verdict Schrödinger devised his box to expose as absurd – while Searle appeals to our everyday intuitions regarding language to conclude that his man truly does not understand Chinese.
The man certainly does not understand Chinese in any conventional sense of the word “understand”, at least as it is applied to language competence. Yet the man and his rules do seem capable of generating competent Chinese responses to real Chinese requests. However, it hardly seems possible to separate people from their language abilities in this way, and so it seems that Searle is asking us to use our intuitions to reason about a situation that is anything but intuitive. AI proponents have responded to Searle’s thought experiment with a variety of counter-arguments, but the System response, as it is generally known, offers one of the most robust defenses of strong AI. Proponents of the System response agree with Searle that the man alone does not understand Chinese. However, they also argue that the combined system – man plus rules – clearly does understand Chinese, as shown by its ability to answer real Chinese questions. To focus on the man alone is a kind of sleight of hand, of the bait and switch variety, and is no more sensible than asking if the eye alone can understand a painting, or the ear alone a poem, or whether a single neuron can understand a language.
Searle offers an ingenious riposte to the System reply. Imagine, he says, that the man is forced to memorize all of his rules, to the point where he no longer needs to refer to his rule books when generating Chinese outputs from Chinese inputs. The system is now reduced to the man alone, since memorization weaves the rules into the mental fabric of the man. Yet the man still does not understand Chinese, so the integrated system of man plus internalized rules is still just a fake.
Searle’s bravura retort to the System response assumes that memorization of the rule books allows the man’s higher-level cognitive processes to consciously execute the Chinese room algorithm by bringing to mind the necessary rules for each symbol or combination of symbols. The man continues to engage with the rules at a deliberative, conscious level, and he still has no more understanding of each symbolic “squiggle” than he did when he relied upon his books as an external memory store. In computational terms, the unfortunate drudge has simply been “upgraded”, from an old-fashioned system that used external punch-cards to a shiny new version with an on-board program memory. No matter how practiced the man becomes in the execution of his newly memorized rules, his processes all still occur above the level of symbol-grounding, and the symbols remain ungrounded in the man's own experiences of the world. As Texans are wont to say of fake cowboys, this new man is “all hat and no cattle”.
This is a masterful sleight of hand on Searle’s part. No one expects to be able to play tennis like Roger Federer or to play chess like Garry Kasparov just by reading a how-to book by either of these accomplished sportsmen. We intuitively recognize that such books offer knowledge that can be appreciated by our higher-level cognitive processes, but this knowledge can only facilitate the acquisition of an ingrained skill through continuous physical practice and sheer, hard graft. We acquire the knowledge via conscious processes, but even perfect memorization of a book is not enough to create the corresponding muscle memories and cerebellar pathways. We only truly acquire a skill when this how-to knowledge is transformed into our network of unconscious, automatic associations. And so it goes for the memorization of rules in Searle’s Chinese room. The symbols remain inscrutable to the man because no meaningful associations have been forged to the man's memories and experiences of the world; the symbols evoke no stereotypes or mental imagery, they stir no feelings or emotions, they prime no related ideas and they initiate no waves of spreading activation. But how might these automatic associations be acquired by our man in the room? Most likely, the man would need to be exposed to each symbol and its corresponding rules of usage in contexts that repeatedly reinforce the appropriate images, emotions and related ideas (e.g. sunshine and flowers evoke happiness, rainclouds and funerals evoke sadness, etc.). This calls for a situated approach to language learning, much like the situated manner in which humans learn a language.
Searle might well protest that for the man to memorize his rules in this way is really no different to him learning Chinese in the first place. But such an objection would expose the hidden circularity at the core of Searle’s argument, which implicitly assumes that men can truly learn Chinese while machines never can. This implicit assumption seems to be based on yet another: that computers can mimic the rule-like, logical thought processes of a human, but cannot possess lower-level, experience-grounded processes of their own. Any intelligent-seeming human-like output from a machine is thus a fake. It may be an excellent fake, like a painting by Picasso with all the right brushstrokes but none of the right feelings, but it must be a fake nonetheless. Searle’s Chinese room was designed to satirize early developments in AI that were all rules and no intuition, developments which were promising but vastly overhyped. But just as these early systems and their design philosophies do not represent the totality of work in AI, neither does Searle’s thought experiment preclude the development of computers with intricate association networks of grounded symbols that can serve to anchor real rather than fake understanding. Recall Douglas Hofstadter’s admonition of Nagel and Newman, which effectively called for the development of intuitive processes in computers: “A subtler challenge would be to devise ‘a fixed set of directives’ by which a computer might explore the world … guided by visual imagery, the associative patterns linking concepts, and the intuitive processes of guesswork, analogy, and aesthetic choice.” Searle’s Chinese room certainly shows that Hofstadter’s challenge must be met, but it fails to show that all our efforts in this respect are necessarily futile.