
Mind, Body, World- Foundations of Cognitive Science, 2013a


experiment, people write questions in Chinese symbols and pass them through a slot into a room. Later, answers to these questions, again written in Chinese symbols, are passed back to the questioner. The philosophical import of the Chinese room arises when one looks into the room to see how it works.

Inside the Chinese room is a native English speaker—Searle himself—who knows no Chinese, and for whom Chinese writing is a set of meaningless squiggles. The room contains boxes of Chinese symbols, as well as a manual for how to put these together in strings. The English speaker is capable of following these instructions, which are the room's algorithm. When a set of symbols is passed into the room, the person inside can use the instructions to put together a new set of symbols to pass back outside. This is the case even though the person inside the room does not understand what the symbols mean, and does not even know that the inputs are questions and the outputs are answers. Searle (1980) uses this example to pose a challenge: where in this room is the knowledge of Chinese? He argues that it is not to be found, and then uses this point to argue against strong claims about the possibility of machine intelligence.
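The room's procedure can be sketched as pure rule-following. In this hypothetical illustration (the symbols, rules, and function names are invented, not taken from Searle's description), the rule book is a lookup table pairing input symbol strings with output symbol strings; the operator mechanically matches and copies, with no access to what the symbols mean:

```python
# A toy sketch of the Chinese room as rule-following symbol manipulation.
# To the operator, keys and values are meaningless squiggles; the translations
# in the comments are available only to us, not to the function.

RULE_BOOK = {
    "你好吗？": "我很好。",  # "How are you?" -> "I am fine."
    "你几岁？": "我两岁。",  # "How old are you?" -> "I am two."
}

def operate_room(symbols: str) -> str:
    """Mechanically apply the rule book; return nothing if no rule matches."""
    return RULE_BOOK.get(symbols, "")

print(operate_room("你好吗？"))  # the room emits an answer the operator cannot read
```

From the outside, the room converses; inside, there is only matching and copying, which is the intuition Searle's argument trades on.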

But should we expect to see such knowledge if we were to open the door to the Chinese room and peer inside? Given our current discussion of the architecture, we would perhaps be unlikely to answer this question affirmatively. This is because if we could look inside the "room" of a calculating device to see how it works—to see how its physical properties bring its calculating abilities to life—we would not see the input-output mapping, nor would we see a particular algorithm in its entirety. At best, we would see the architecture and how it is physically realized in the calculator. The architecture of a calculator (e.g., the machine table of a Turing machine) would look as much like the knowledge of arithmetic calculations as Searle and the instruction manual would look like knowledge of Chinese. However, we would have no problem recognizing the possibility that the architecture is responsible for producing calculating behaviour!
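This point can be sketched with an invented example (none of it is from the text): a single primitive operation, NAND, composed into a one-bit adder. Inspecting the primitive alone reveals no knowledge of arithmetic, yet arithmetic behaviour emerges from how the architecture wires it together:

```python
# The architectural primitive: a single logic operation. Nothing about it
# looks like arithmetic.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Arithmetic behaviour emerges only from the composition of primitives.
def half_adder(a: int, b: int) -> tuple:
    """Add two bits, returning (sum, carry), using only NAND."""
    t = nand(a, b)
    s = nand(nand(a, t), nand(b, t))  # XOR built from NANDs: the sum bit
    c = nand(t, t)                    # AND built from NAND: the carry bit
    return s, c

print(half_adder(1, 1))  # 1 + 1 = binary 10: sum 0, carry 1
```

Opening the device and examining `nand` is like opening the Chinese room and examining Searle: the primitive responsible for the behaviour does not itself resemble the knowledge the behaviour displays.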

Because the architecture is simply the primitives from which algorithms are constructed, it is responsible for algorithmic behaviour—but doesn't easily reveal this responsibility on inspection. That the holistic behaviour of a device would not be easily seen in the actions of its parts was recognized in Leibniz' mill, an early eighteenth-century ancestor to the Chinese room.

In his Monadology, Gottfried Leibniz wrote:

Supposing there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain Perception. It is accordingly in the simple substance,

