The Chinese Room

The Chinese Room, sometimes referred to as the Chinese Box, is a thought experiment devised by the philosopher John Searle to debunk “strong AI.”

Searle’s argument goes like this: you put an English speaker who knows no Chinese in a little room. Slips of paper with Chinese written on them are passed in; the English speaker consults a huge compendium of rules for analyzing and responding to these slips, follows those rules, produces new slips in response, and passes them back out of the room. To a Chinese speaker on the outside, the slips coming out would appear to be perfectly reasonable responses to the slips going in, but (according to Searle) that doesn’t mean the guy in the room understands Chinese.

Jenny and I have long used the Chinese Room as a metaphor for the translation process in some of our knottier jobs–not so much in terms of our weakness with the language as with the field of knowledge. I was recently asked to do a mercifully short job on seismology (about which I know almost nothing) that put me in mind of this. The job contained terms that I didn’t know in Japanese, and when I found their English equivalents (or in some cases, what I guessed to be their English equivalents), I dutifully typed them into my translation with only the most superficial idea of what they might really mean. Chinese Room. When we find ourselves in situations like this, we just clench our sphincters and hope that the eventual target audience will know what the hell we’re talking about, because we sure don’t.

But thinking about the original Chinese Room argument (and the surrounding debate, which is extensive) is frustrating because it is so perfectly hypothetical. Searle’s point was to create an analog to the Turing Test (digression: I just learned that, quite fortuitously, today would have been Alan Turing’s 92nd birthday) that would show up the absurdity of AI. The problem with his argument is that it’s so procedural, so mechanistic: the idea is that there can be a rote, pre-scripted response for every possible input. (This is pretty much the same problem that machine translation has today.) The Chinese room would probably need to be impossibly large to accommodate all the rule books, and it would certainly take an impossibly long time to prepare them.
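To make the mechanistic picture concrete, here is a toy sketch in Python of what a rote rule book amounts to: a giant lookup table pairing every input slip with a canned output slip. (The entries are my own invented examples, not anything from Searle; the point is only that covering every possible slip this way is what makes the room balloon.)

```python
# A toy "Chinese Room" rule book: every possible input slip must be paired,
# in advance, with a canned output slip. (Example entries are invented.)
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天很热。",    # "How's the weather?" -> "It's very hot today."
}

def chinese_room(slip: str) -> str:
    """Follow the rules mechanically: look up the slip, hand back the canned reply."""
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```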

One of the primary arguments against Searle was that the guy in the room might not know Chinese, but the system (of which he is a part) does know it. OK, Searle responds, suppose the guy memorizes all the rule books so he doesn’t need to be in the room anymore: he still wouldn’t know Chinese. Aside from that being an improbable feat of memory, I’d argue that yes, actually, he probably would. How could you memorize all those characters and the rules for dealing with them without developing some kind of internal model of how the language works–one that would let you consolidate all the redundancy that would have to be present in the rule books? That sounds a lot like language acquisition to me. For Searle’s argument to work, the human would need to be as dumb as the computer, in which case he’d be undercutting his own argument anyhow. (Digression: I’ve always been struck by how much native fluency in a language is basically a matter of following a script. I noticed in Japan that whole conversations would sometimes follow a script with only one or two decision points along the way–other than that, they were entirely ritualized. But in English as well, there are so many ritualistic utterances used in specific situations, or in response to the last ritualistic utterance, that one could probably pull off a pretty good simulation of English fluency by following a rule book with instructions like “when it’s very hot out, greet people by saying ‘Hot enough for ya?’” Etc.)
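As a hypothetical illustration (my own toy example, not anything Searle proposed), a few of those ritual exchanges could be captured in a rule book keyed on the situation rather than on the exact words, with a stock reply for each stock opener:

```python
# Toy script for ritualized small talk: a situation triggers a stock opener,
# and the stock opener triggers a stock reply. (Entries are invented examples.)
OPENERS = {
    "very_hot": "Hot enough for ya?",
    "friday_afternoon": "Got any plans for the weekend?",
}

REPLIES = {
    "Hot enough for ya?": "Tell me about it.",
    "Got any plans for the weekend?": "Nothing special. You?",
}

def greet(situation: str) -> str:
    """Pick the stock opener for the situation, if the script has one."""
    return OPENERS.get(situation, "How's it going?")

def respond(utterance: str) -> str:
    """Return the ritual reply; fall back to a generic acknowledgment."""
    return REPLIES.get(utterance, "Yeah.")

print(greet("very_hot"))              # Hot enough for ya?
print(respond("Hot enough for ya?"))  # Tell me about it.
```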

I realize this is a tangent to Searle’s original point, but perhaps it pertains to AI after all: perhaps what machines really need in order to be smart is the capacity for abstraction, induction, and deduction. I know this is what some AI researchers are working on.