Can a computer think? John Searle famously used the Chinese Room thought experiment to suggest that it can't. Daniel Dennett is not convinced. He thinks that Searle's thought experiment is what he calls a 'boom crutch' - a faulty intuition pump. Here, in conversation with Nigel Warburton, he explains why.
Listen to Daniel Dennett on the Chinese Room
Listen to an earlier Philosophy Bites interview with Daniel Dennett on Free Will Worth Wanting
The interview with Dennett on the Chinese Room suggests that Dennett has a very strange idea of "understanding." In the interview, they discuss Searle's claim that the Chinese Room thought experiment demonstrates that the computer translating Chinese has no real understanding of Chinese. Dennett responds that Searle misses something he insists all computer coders know: the "understanding" is not in the code per se but is in some sense distributed throughout the whole computer system. But doesn't this imply that my desktop computer has in some sense some "understanding" about the tasks it does? This strikes me as a very weird argument.
Posted by: George | July 15, 2013 at 07:53 PM
@George: I don't see anything weird about Dennett's argument. Your desktop computer does not understand anything *about* the tasks it does. For instance, a chess program usually doesn't contain any code that lets the program understand such concepts as 'game' or 'contest' or 'winning'. If it can be said to 'understand' anything, it would be how to play chess, which is all it's programmed to do. In order for it to understand anything about what it's doing, one might have to write software for that purpose. (A toy sketch of the chess example follows at the end of this comment.)
In the Chinese Room thought experiment, the system is designed for human-level understanding, which is of course very different from an ordinary desktop computer.
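A minimal sketch of the chess-program point, assuming Python. Everything in it (the tiny game tree, the scores, the function names) is invented for illustration; the point is only that the code shuffles position labels and numbers, and nothing in it represents concepts like 'game', 'contest' or 'winning' except in the human-facing comments.

```python
# A toy game-tree search in the spirit of a chess engine's core loop.
# The tree, the scores and the names are made up for this sketch.

# A hypothetical game tree: each position maps to its successor positions;
# leaf positions map to a numeric score from the mover's point of view.
TREE = {
    "p0": ["p1", "p2"],
    "p1": ["p3", "p4"],
    "p2": ["p5"],
}
SCORES = {"p3": 1, "p4": -1, "p5": 0}

def negamax(position):
    """Return the best achievable score from this position."""
    if position in SCORES:                    # leaf: just a stored number
        return SCORES[position]
    # recurse: the best reply for the opponent is the worst for us
    return max(-negamax(child) for child in TREE[position])

def choose_move(position):
    """Pick the successor position with the highest negamax score."""
    return max(TREE[position], key=lambda child: -negamax(child))

if __name__ == "__main__":
    print(choose_move("p0"))   # prints a position label, nothing more
```

Run as-is, it just prints the label of the chosen successor position; nowhere does any variable stand for 'winning' as a concept rather than as a number to be maximised.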
Posted by: tor | July 17, 2013 at 01:08 AM
Think of it like this: an individual neuron does not understand Chinese either. In Searle's thought experiment he - an individual in a room - is the equivalent of a neuron. If he had a helper, that helper might be the equivalent of another neuron. All of his tools are the equivalent of either other neurons or the chemical/electrical interactions between neurons. All of this stuff by itself does not understand Chinese. Put it all together, though, and yes, it does understand Chinese.
And yes "in some sense" your computer does understand the tasks it does. It probably isn't sophisticated enough to reflect on those tasks or to feel good or bad about those tasks, but those "humanesque" aspects are all things programmed in the language of neural connections. If you were able to program something similarly sophisticated into your computer, then it would have those same "humanesque" qualities.
Posted by: Jim Meyers | July 18, 2013 at 11:45 AM
This felt like a bit of an exercise in point-missing. The point of the Chinese Room, I take it, is that algorithm execution != "understanding". Arguing about which bits of the algorithm reside where and how complex that algorithm would have to be seems to make sense only if you've got some theory about "understanding" that makes it emergent from structure or complexity or some mixture of the two. Without that being laid out, it seems like Searle is saying '"understanding" relies on some mysterious thing called "consciousness" that is not captured in the algorithm' and Dennett is saying '"understanding" can be captured in the algorithm, as long as the algorithm is sufficiently mysterious'. Which doesn't seem like a huge advance. Perhaps he unpacks this elsewhere ...?
Posted by: Timothy Hill | July 18, 2013 at 04:54 PM
I agree with you, Timothy. The point of the Chinese Room argument (for Searle) is to demonstrate a process where an outside party would see correct translation from English to Chinese, even though no one doing the translating understands Chinese. "Understanding" in this sense is simply understanding Chinese. But this also shows that there is no reason to believe any computer executing instructions (no matter how sophisticated) has any understanding of what it is doing. Dennett (from what I can tell) is trying to argue that there is "understanding" not just in the code but distributed throughout the computer system. But I don't think this argument succeeds, because Searle designed his thought experiment so that no one involved in the room has any understanding of Chinese.
And I'm afraid I don't see how my computer "understands" the tasks it is doing. I don't see that programming something into a computer (say a sophisticated program that will simulate speech) will give it understanding of what it is saying. It is simply a machine executing instructions.
Posted by: George | July 23, 2013 at 08:58 PM
I should probably let this go, but the more I think about it, the stranger it seems to me to analyse 'understanding' (as opposed to 'processing') a language purely in terms of linguistic algorithms. I would have thought that, minimally, *understanding* a language would involve:
(a) competency with the rules and vocabulary of the language
(b) awareness of the items or processes in some kind of (possibly virtual) world to which elements of that language refer
(c) an ability to map the elements of the language against elements of the world and assess their truth value
and maybe even
(d) an ability to account for and correct discrepancies in mapping
Of course, all of these could in principle be implemented algorithmically, and if one imagines a virtual world consisting of, say, only two states and a language restricted to describing those states, the algorithms themselves might even be trivial. But it seems to me that this situation might be a minimal context in which it makes sense to start talking about 'understanding' the 'meaning' of a language - rather than viewing it as a function of the complexity of the linguistic algorithm per se.
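A minimal sketch, assuming Python, of the two-state case just described; every name in it (LightWorld, interpret, revise) is invented for illustration. The world has exactly two states, the language has exactly two sentences, and each of (a)-(d) above is implemented in a line or two.

```python
# A toy implementation of criteria (a)-(d) for a two-state world.
# All names and details here are made up for illustration.

# (b) the virtual world: a single light that is either on or off
class LightWorld:
    def __init__(self, light_on):
        self.light_on = light_on

# (a) the entire language: two sentences and what each claims about the light
STOCK_MAPPING = {"the light is on": True, "the light is off": False}

# (c) map a sentence onto the world and assess its truth value
def interpret(sentence, world, mapping):
    claimed_state = mapping[sentence]        # what the sentence claims
    return claimed_state == world.light_on   # True iff the claim matches the world

# (d) account for and correct a discrepancy in the mapping
def revise(sentence, observed_truth, world, mapping):
    # if the mapping's verdict disagrees with an observed truth value,
    # repair the mapping by flipping what that sentence claims
    if interpret(sentence, world, mapping) != observed_truth:
        mapping[sentence] = not mapping[sentence]

if __name__ == "__main__":
    world = LightWorld(light_on=True)
    mapping = dict(STOCK_MAPPING)
    print(interpret("the light is on", world, mapping))  # True
    # corrupt the mapping, then let (d) repair it against an observation
    mapping["the light is on"] = False
    revise("the light is on", observed_truth=True, world=world, mapping=mapping)
    print(interpret("the light is on", world, mapping))  # True again
```

Run as-is, it prints True twice; the 'repair' in revise works only because the world has two states, which is how trivial the algorithms can be in this minimal case - the question of whether that minimal context already deserves the word 'understanding' is the one raised above.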
Posted by: Timothy Hill | July 24, 2013 at 12:10 AM
If I were asked to name one person I would like to have a conversation with in my life, it would be Dan Dennett. He's one of the atheists who respect a lot of the religious tradition but question its present-day value, because there are so many unanswered questions about the human mind whose answers will eventually explain religion and human cultural anthropology.
Unlike Dan, I'm not a big fan of AI, not because of the Chinese Room but because of the False Liver Argument: a demon removes the liver from a man's body while he's asleep and puts a mechanical liver in the body, one which appears identical to a real liver and connects to all of the necessary digestive organs. The only difference is that every night, while the man is asleep, the demon puts a fresh supply of bile in the liver. Functionally this liver is identical to a real liver, but it is not a real liver, because it does not produce bile, just as an AI simulation does not produce consciousness.
Posted by: Victor Panzica | July 26, 2013 at 07:52 PM