One of the objections to Searle is the systems reply. It states that the man in the room is only one part of a larger system. The system as a whole understands Chinese even if this individual part does not. Searle's response:

“[…]let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.”

Here Searle states that if the man in the room does not understand, the system does not understand, because the system is just a part of him. I will argue that Searle is merely dodging the actual objection. If the man is a metaphor for the inner workings of the robot, and if Searle's machine can understand only when those inner workings understand, then either understanding at that level is arguably achievable, or we run into an infinite regress of the "man inside the machine".

For example, Searle might respond that the inner workings of the robot cannot understand a language. But where is his evidence for this? Searle clearly says that if the man does not understand, the system does not understand. But by the same logic, if the man understands, the system arguably should understand. Searle never states this, but it follows from his own objection: if the man's failure to understand is enough to deny understanding to the system, then any part of the machine that genuinely understands should confer understanding on the whole.

This is precisely why Searle makes the move he does. He cannot allow the part to understand while the whole does not, because that would not make sense. He also cannot allow the whole to understand while the part does not, because that is exactly the systems reply and would render his Chinese room obsolete. The only remaining option is that if the part understands, the whole understands. Searle argues the robot cannot understand if the man does not; so if the man does understand, the robot (the whole) should be able to understand as well.

Now, who is to say the inner workings cannot understand? Consider the room under Searle's own example: we have input, the man, the program (instructions), and output. Searle's fulcrum is the man. Suppose we give him the same instructions over and over. One Chinese character is shown, and the English instructions always tell him to hand out Chinese symbol X. We give him the same instructions for hours, DAYS even. Eventually he no longer needs the English instructions.
He KNOWS which symbol to throw out. We then repeat this process with every possible Chinese combination of words. He has now memorized an output for any input one could think of. The man still has no idea what the input means. Chinese symbol X could mean "Do you like rice?" yet he does not know this; he only knows to throw out Y, which happens to mean "Yes I do, it's delicious."

Eventually, though, the man will leave the box. If he encountered a Chinese person who passed him a note, he would write back a response even though the response means nothing to him. Yet the person passing the note believes he actually understands the language. Why? Because we are no longer talking about a robot; we are talking about a human being. We assume a robot is just programmed to work a certain way, so that Chinese person would be skeptical of C3PO passing the note back, but not of a human.

Here is where a problem arises. Either we allow the man's understanding to be real "understanding," or we have to question the inner workings of every person on the face of the planet. What if, as we speak to each other, I am not processing your language the way you are? Am I wrong? This man is obviously not processing the language the same way, but does he not understand your language? If you were not able to look inside his mind, you could not judge his understanding. Yet in the robot's case we are allowed to know its inner workings and judge them. The trouble is that we do not even know how language itself works; that is why we have a philosophy of language in the first place. If that is the case, we should not be able to declare that the inner workings of the robot understand or do not understand. I would say that the man does indeed understand, because I would never be able to know of his training in the Chinese room.
I would only think he understood my note and passed back "Yes, rice is delicious." We can judge understanding only by knowing the inner workings of each other's minds, which is probably impossible; failing that, only the man's output can be judged. If only his output can be judged, he is seen to understand. And if he is seen to understand, then the Chinese room understands, because if the part (the man) understands, the whole (the room or robot) understands [relaying back to Searle's claim that the whole (the system) cannot understand if the part (the man) does not understand].
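The man's memorized rule book amounts to nothing more than a lookup table. A minimal sketch, with hypothetical stand-in phrases for the symbol mappings (the actual contents of Searle's rule book are of course unspecified):

```python
# The man's memorized rules: every Chinese input he has drilled on
# maps to a fixed Chinese output. The entries here are illustrative
# placeholders, not anything from Searle's thought experiment.
rule_book = {
    "你喜欢米饭吗？": "是的，很好吃。",  # "Do you like rice?" -> "Yes, it's delicious."
    "你好吗？": "我很好。",              # "How are you?" -> "I am fine."
}

def answer(note: str) -> str:
    """Return the memorized response for a note.

    This is pure symbol lookup: no meaning is consulted anywhere,
    yet the reply is indistinguishable from an understander's.
    """
    return rule_book.get(note, "")

print(answer("你喜欢米饭吗？"))  # prints 是的，很好吃。
```

From the note-passer's side, only this output is visible; nothing in the exchange reveals whether a mind or a table produced it, which is exactly the judging-by-output problem raised above.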
