
June 23, 2013



The interview with Dennett on the Chinese Room suggests that Dennett has a very strange idea of "understanding." In the interview, they discuss Searle's claim that the Chinese Room thought experiment demonstrates that a computer processing Chinese has no real understanding of Chinese. Dennett responds that Searle misses something he insists all programmers know: the "understanding" is not in the code per se but is in some sense distributed throughout the whole computer system. But doesn't this imply that my desktop computer has, in some sense, some "understanding" of the tasks it does? This strikes me as a very weird argument.


@George: I don't see anything weird about Dennett's argument. Your desktop computer does not understand anything *about* the tasks it does. For instance, a chess program usually doesn't contain any code that lets it understand such concepts as 'game' or 'contest' or 'winning'. If it can be said to 'understand' anything, it would be how to play chess, which is all it's programmed to do. For it to understand anything about what it's doing, one would have to write software for that purpose.
In the Chinese Room thought experiment, the system is designed for human-level understanding, which is of course very different from an ordinary desktop computer.

Jim Meyers

Think of it like this: an individual neuron does not understand Chinese either. In Searle's thought experiment he - an individual in a room - is the equivalent of a neuron. If he had a helper, that helper would be the equivalent of another neuron. All of his tools are the equivalent of either other neurons or the chemical/electrical interactions between neurons. None of this stuff by itself understands Chinese. Put it all together, and yes, it does understand Chinese.

And yes, "in some sense" your computer does understand the tasks it does. It probably isn't sophisticated enough to reflect on those tasks or to feel good or bad about them, but those "humanesque" aspects are all things programmed in the language of neural connections. If you were able to program something similarly sophisticated into your computer, then it would have those same "humanesque" qualities.

Timothy Hill

This felt like a bit of an exercise in point-missing. The point of the Chinese Room, I take it, is that algorithm execution != "understanding". Arguing about which bits of the algorithm reside where, and how complex that algorithm would have to be, makes sense only if you've got some theory of "understanding" that makes it emergent from structure or complexity or some mixture of the two. Without that being laid out, it seems like Searle is saying '"understanding" relies on some mysterious thing not captured in the algorithm called "consciousness"' and Dennett is saying '"understanding" can be captured in the algorithm, as long as the algorithm is sufficiently mysterious'. Which doesn't seem like a huge advance. Perhaps he unpacks this elsewhere...?


I agree with you, Timothy. The point of the Chinese Room argument (for Searle) is to demonstrate a process where an outside party would see correct translation from English to Chinese, even though no one doing the translating understands Chinese. "Understanding" in this sense is simply understanding Chinese. But this also shows that there is no reason to believe any computer executing instructions (no matter how sophisticated) has any understanding of what it is doing. Dennett (from what I can tell) is trying to argue that there is "understanding" not just in the code but distributed throughout the computer system. But I don't think this argument succeeds, because Searle designed his thought experiment so that no one involved in the room has any understanding of Chinese.

And I'm afraid I don't see how my computer "understands" the tasks it is doing. I don't see that programming something into a computer (say a sophisticated program that will simulate speech) will give it understanding of what it is saying. It is simply a machine executing instructions.

Timothy Hill

I should probably let this go, but the more I think about it, the stranger it seems to me to analyse 'understanding' (as opposed to 'processing') a language purely in terms of linguistic algorithms. I would have thought minimally, *understanding* a language would involve:

(a) competency with the rules and vocabulary of the language
(b) awareness of the items or processes in some kind of (possibly virtual) world to which elements of that language refer
(c) an ability to map the elements of the language against elements of the world and assess their truth value

and maybe even

(d) an ability to account for and correct discrepancies in mapping

Of course, all of these could in principle be implemented algorithmically, and if one imagines a virtual world consisting of, say, only two states and a language restricted to describing those states, the algorithms themselves might even be trivial. But it seems to me that this situation might be a minimal context in which it makes sense to start talking about 'understanding' the 'meaning' of a language - rather than viewing it as a function of the complexity of the linguistic algorithm per se.
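Timothy's two-state toy world can be sketched in a few lines of Python. This is only an illustration of points (a)-(d) above, not anyone's actual proposal; all names here (the "lamp" world, `well_formed`, `truth_value`, `revise`) are invented for the example.

```python
# (b) a "world" of exactly two states: the lamp is either "on" or "off"
world = {"lamp": "on"}

def well_formed(sentence):
    """(a) competency: the only grammar is 'lamp is <on|off>'."""
    words = sentence.split()
    return (len(words) == 3 and words[0] == "lamp"
            and words[1] == "is" and words[2] in {"on", "off"})

def truth_value(sentence, world):
    """(c) map the sentence onto the world and assess its truth.
    Returns None for sentences the language can't express."""
    if not well_formed(sentence):
        return None
    claimed_state = sentence.split()[2]
    return world["lamp"] == claimed_state

def revise(sentence, world):
    """(d) correct a discrepancy: replace a false belief with a true one."""
    if truth_value(sentence, world) is False:
        return f"lamp is {world['lamp']}"
    return sentence

print(truth_value("lamp is on", world))   # True
print(truth_value("lamp is off", world))  # False
print(revise("lamp is off", world))       # "lamp is on"
```

As the paragraph above says, the algorithms really are trivial here; the question the thread is arguing over is whether scaling this mapping-and-correction loop up to a rich world ever amounts to "understanding", or only ever to processing.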

Victor Panzica

If asked who is the one person I would most like to have a conversation with in my life, it would be Dan Dennett. He's one of the atheists who respects a lot of the religious tradition but questions its present-day value, given the many unanswered questions about the human mind that would explain religion and human cultural anthropology.

Unlike Dan, I'm not a big fan of AI - not because of the Chinese Room but because of the False Liver Argument: a demon removes the liver from a man's body while he's asleep and puts a mechanical liver in the body, which appears identical to a real liver and connects to all of the necessary digestive organs. The only difference is that every night the demon puts a fresh supply of bile in the liver while the man is asleep. Functionally this liver is identical to a real liver, but it is not a real liver because it does not produce bile, just as an AI simulation does not produce consciousness.
