The Chinese Room

The Chinese Room, sometimes referred to as the Chinese Box, is a thought-experiment invented by John Searle to debunk “strong AI.”

Searle’s argument goes like this: put an English speaker who knows no Chinese in a little room. Slips of paper with Chinese writing on them are passed in; the English speaker consults a huge compendium of rules for analyzing and responding to these slips, follows those rules, produces new slips in response, and passes them out of the room. To a Chinese speaker on the outside, these would appear to be perfectly reasonable responses to the statements passed in, but (according to Searle) that doesn’t mean that the guy in the room understands Chinese.

Jenny and I have long used the Chinese Room as a metaphor for the translation process in some of our knottier jobs–not so much for our weakness with the language as for our weakness with the field of knowledge. I was recently asked to do a mercifully short job on seismology (about which I know almost nothing) that put me in mind of this. The job contained terms that I don’t know in Japanese, and when I found their English equivalents (or in some cases, what I guessed to be their English equivalents), I dutifully typed them into my translation with only the most superficial idea of what they might really mean. Chinese Room. When we find ourselves in situations like this, we just clench our sphincters and hope that the eventual target audience will know what the hell we’re talking about, because we sure don’t.

But thinking about the original Chinese Room argument (and the surrounding debate, which is extensive) is frustrating because it is so perfectly hypothetical. Searle’s point was to create an analog to the Turing Test (digression: I just learned that, quite fortuitously, today would have been Alan Turing’s 92nd birthday) that would show up the absurdity of strong AI. The problem with his argument is that it’s so procedural, so mechanistic. The idea is that there can be a rote response for every input. (This is pretty much the same problem that machine translation today has.) The Chinese Room would probably need to be infinitely large to accommodate all the rule books, and it would certainly take an infinite amount of time to prepare those books.
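To make that rote-lookup idea concrete, here is a minimal sketch of what such a responder amounts to. It’s just my own toy illustration (the rules and sentences are invented), not anything from Searle or from a real machine-translation system:

```python
# A purely rote "Chinese Room" responder: every exchange has to be written
# out in advance. The rules and example sentences are invented for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's nice today."
}

def respond(slip: str) -> str:
    # No analysis, no understanding: just look the incoming slip up.
    # Anything the rule book's authors never anticipated falls through,
    # which is why the book would have to be effectively infinite.
    return RULE_BOOK.get(slip, "……")  # in effect, pass back a blank slip

if __name__ == "__main__":
    print(respond("你好吗？"))
    print(respond("这句话不在规则书里。"))  # a sentence the book never anticipated
```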

One of the primary arguments against Searle was that the guy in the room might not know Chinese, but the system (of which he is a part) does know it. OK, Searle responds, suppose the guy memorizes all the rule books so he doesn’t need to be in the room anymore: he still wouldn’t know Chinese. Aside from that being an improbable feat of memory, I’d argue that yes, actually, he probably would. How can you memorize all those characters and rules for dealing with them without developing some kind of internal model of how the language works? One that would allow you to consolidate all the redundancy that would need to be present in the rule books, etc. Sounds a lot like language acquisition to me. In order for Searle’s argument to work, the human would need to be as dumb as the computer, in which case he’d be undercutting his own argument anyhow. (Digression: I’ve always been struck by how much native fluency in language is basically a matter of following a script: I noticed in Japan that whole conversations would sometimes follow a script with only one or two decision points along the way–other than that, they were entirely ritualized. But in English as well, there are so many ritualistic utterances used in specific situations, or in response to the last ritualistic utterance, that one could probably pull off a pretty good simulation of English fluency by following a rule book with instructions like “when it’s very hot out, greet people by saying ‘Hot enough for ya?’” Etc.)
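(To belabor that last digression: here is a minimal sketch of what such a rule book might look like as a program. The situations and canned replies are entirely my own inventions, just to show how far purely scripted, lookup-style “fluency” can go without anything resembling understanding.)

```python
# A toy "phrasebook fluency" simulator along the lines of the rule book
# imagined above. The situations and canned replies are invented for illustration.
SCRIPT = [
    (lambda ctx: ctx.get("temperature_f", 0) > 90, "Hot enough for ya?"),
    (lambda ctx: ctx.get("last_utterance") == "Hot enough for ya?", "Tell me about it."),
    (lambda ctx: str(ctx.get("last_utterance", "")).lower().startswith("thank"), "Don't mention it."),
]

def ritual_reply(context: dict) -> str:
    # Walk the script top to bottom and return the first canned line whose
    # trigger fires; no rule firing is the conversational equivalent of a blank stare.
    for condition, utterance in SCRIPT:
        if condition(context):
            return utterance
    return "Huh."

if __name__ == "__main__":
    print(ritual_reply({"temperature_f": 95}))                     # Hot enough for ya?
    print(ritual_reply({"last_utterance": "Hot enough for ya?"}))  # Tell me about it.
    print(ritual_reply({"last_utterance": "Thanks so much!"}))     # Don't mention it.
```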

I realize this is a tangent to Searle’s original point, but perhaps it can pertain to AI in some way after all: perhaps what the machines really need to be smart is the capacity for abstraction, induction, and deduction. I know this is what some AI researchers are working on.

9 thoughts on “The Chinese Room”

  1. Forrest Hoover

    What you are talking about is that the most effective way to acquire language is through interaction. It is indeed the interactionist approach to language learning, as opposed to the behaviorist approach of memorizing grammar, rules, etc. Surprisingly little language is really learned in rote grammar-type study. It is in the application, the meaningful interactions, and the negotiated meaning that the majority of language is learned.

    For example, as you know, I flunked two years of Spanish in high school. It had little context. In the Peace Corps, I became fluent in Spanish in less than a year. Sure, high school gave me a base, but without meaningful interaction it meant little.

    Still, I am not sure that AI is out completely. The latest brain research is figuring out just where and how language is learned and stored. It is only a matter of time until a chip can be used in conjunction with the human brain to produce the proverbial babel fish. Really, the tech is not so different from the optic implants that can allow blind people to see.

  2. I’m a cognitive science student, and these criticisms of Searle are exactly in line with what I tend to encounter along the way. Summed up rather neatly – I couldn’t agree more. Brilliant post.

  3. dragonfly jenny

    “How can you memorize all those characters and rules for dealing with them without developing some kind of internal model of how the language works? One that would allow you to consolidate all the redundancy that would need to be present in the rule books, etc. Sounds a lot like language acquisition to me.”

    Very well put.

    I just got a burst of inspiration: Into the Chinese box, also put 10,000 monkeys and 10,000 typewriters. Then you’d have a mechanistic system for both translating Chinese and generating the works of Shakespeare, which could then be translated into Chinese.

  4. Well, I’m *not* a cognitive science student, which may explain why Searle’s thought experiment doesn’t seem all that clear to me. I Googled up an extended explanation and collection of responses which make it seem more complex than what is presented here. For instance, the material available to the guy in the room is organized into some components with different functions which correspond to a specific AI program that Searle was responding to.

    Adam, you say that if the guy memorizes the rules so he can provide the right output without reference to the rulebook, it sounds like language acquisition to you. But Searle specifically says that the guy in the box doesn’t ever understand the meaning of the symbols. If he starts deducing semantics as well as syntax, it violates the model. So for my money, at least, it’s not like language acquisition at all.

    That said, I like the Chinese box as a metaphor. I’ve certainly done my share of faking foreign language ability beyond my actual understanding. It reminds me of the analogous practice of cargo-cult programming.

  5. The human in the Chinese box would have to be singularly dull and uninquisitive to fail to learn anything at all about the Chinese language. Kind of like our current POTUS, maybe. It is more believable if there are dozens or hundreds of humans in the box, shuffling data-items around.

    In this case, it is really equivalent to the ant colony-brain in “Goedel, Escher, Bach”. Or, in more familiar terms, the people in the box are like neurons, or groups of neurons, firing in the brain of a bilingual translator.

    BTW it may just have been the environment, but no-one in the philosophy departments of either Toronto Univ. or Indiana Univ. seemed to take Searle’s Chinese box very seriously, at least in the seminars I attended.

  6. I think the primary problem with the Chinese Room argument has always been that it depends on the reader deciding something is absurd just because it sounds bad. We don’t want to say the room knows Chinese–does that necessarily mean it isn’t so? Since we’re on a subject we know little about anyway, who knows–maybe consciousness itself arises from the sort of algorithmic procedures necessary to encode language in the first place, whether in brains or on paper or worked out by rote calculation in an oblivious mind?

    The point is, intuition alone just isn’t enough to get to the conclusion Searle was on about–all sorts of very solid scientific knowledge is, after all, counter-intuitive.

    I wonder… If a Chinese room calls out for Schrödinger’s cat while it’s observing a two-slit experiment–oh never mind. ;)

  7. “I just got a burst of inspiration: Into the Chinese box, also put 10,000 monkeys and 10,000 typewriters. Then you’d have a mechanistic system for both translating Chinese and generating the works of Shakespeare, which could then be translated into Chinese.”

    Why not just use Chinese monkeys? Seems to me that would save a step. :-p

  8. I think that, while the Chinese Room is in no way a convincing argument against “strong AI,” it does nonetheless have at least one important point, touched upon by the first poster talking about their experiences learning Spanish:

    The crucial thing about this thought experiment is the implication that the vast collection of books with systems of algorithms for producing “meaningful” output is a static entity, in a couple of different ways. For one thing, it doesn’t sound like there’s any way for the system to learn and change, unless some of the algorithms include instructions to rewrite the algorithms themselves based on input.

    More importantly, though, the meanings encoded in the room have no direct relationship to the real world (anymore). By this I mean that the room can say “the sky is blue” in Chinese without knowing what blue looks like. I don’t think the guy in the room would ever figure out what the room is saying, because he’s only ever given the relationship of the words to each other, not the relationship of the words to actual experiences.

    So, I think that this thought experiment is good for showing that “strong AI” needs to be grounded in human-like perceptions in order to be anything like a human mind.
