
In the most abstract sense, both a model and a modelled agent can be viewed as opaque devices, black boxes whose inner workings are invisible. From this perspective, both are machines that convert inputs or stimuli into outputs or responses; their behaviour computes an input-output function (Ashby, 1956, 1960). Thus the most basic point of contact between a model and its subject is that the input-output mappings produced by one must be identical to those produced by the other. Establishing this fact is establishing a relationship between model and subject at the computational level.
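To make this concrete, here is a minimal sketch in Python (an illustration, not from the source) that treats model and subject as hypothetical opaque callables and compares nothing but their input-output mappings over a set of test stimuli:

```python
def same_input_output_mapping(model, subject, stimuli):
    # Treat both systems as black boxes: all that can be observed,
    # and therefore all that is compared, is the output each one
    # produces for the same input.
    return all(model(s) == subject(s) for s in stimuli)
```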

To say that a model and subject are computing the same input-output function is to say that they are weakly equivalent. It is a weak equivalence because it is established by ignoring the internal workings of both model and subject. There are an infinite number of different algorithms for computing the same input-output function (Johnson-Laird, 1983). This means that weak equivalence can be established between two different systems that use completely different algorithms. Weak equivalence is not concerned with the possibility that two systems can produce the right behaviours but do so for the wrong reasons.
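The point can be illustrated with two procedures whose inner workings differ completely but whose input-output mappings agree (again a sketch; the functions below are chosen for illustration, not taken from the source):

```python
def sum_iterative(n):
    # Computes 1 + 2 + ... + n by looping over every term.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    # Computes the same function with Gauss's formula; no loop at all.
    return n * (n + 1) // 2

# Weakly equivalent on these test inputs, despite different algorithms.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1, 101))
```

An observer restricted to inputs and outputs, like the `same_input_output_mapping` check sketched above, has no way to tell these two systems apart.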

Weak equivalence is also sometimes known as Turing equivalence. This is because weak equivalence is at the heart of a criterion proposed by computer pioneer Alan Turing to determine whether a computer program had achieved intelligence (Turing, 1950). This criterion is called the Turing test.

Turing (1950) believed that a device's ability to participate in a meaningful conversation was the strongest test of its general intelligence. His test involved a human judge conducting, via teletype, a conversation with an agent. In one instance, the agent was another human. In another, the agent was a computer program. Turing argued that if the judge could not correctly determine which agent was human, then the computer program must be deemed to be intelligent. A similar logic was subscribed to by Descartes (2006). Turing and Descartes both believed in the power of language to reveal intelligence; however, Turing believed that machines could attain linguistic power, while Descartes did not.
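The structure of the test itself is simple enough to sketch in a few lines (a toy illustration; the respond_human, respond_program, and judge callables are hypothetical stand-ins, not part of the source):

```python
import random

def imitation_game(questions, respond_human, respond_program, judge):
    # Hide the two respondents behind anonymous labels, as the
    # teletype did for Turing's judge.
    labels = ["A", "B"]
    random.shuffle(labels)
    agents = {labels[0]: respond_human, labels[1]: respond_program}

    # The judge sees only the transcripts, never the agents themselves.
    transcripts = {
        label: [(q, agent(q)) for q in questions]
        for label, agent in agents.items()
    }

    guess = judge(transcripts)  # the label the judge believes is human
    # The program passes this round when the judge guesses wrong.
    return agents[guess] is not respond_human
```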

A famous example of the application of the Turing test is provided by a model of paranoid schizophrenia, PARRY (Colby et al., 1972). This program interacted with a user by carrying on a conversation; it was a natural language communication program much like the earlier ELIZA program (Weizenbaum, 1966). However, in addition to processing the structure of input sentences, PARRY also computed variables related to paranoia: fear, anger, and mistrust. PARRY's responses were thus affected not only by the user's input but also by its evolving affective states. PARRY's contributions to a conversation became more paranoid as the interaction was extended over time.
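The flavour of this design can be conveyed with a toy agent (emphatically not PARRY's actual program, whose details the text does not give; the trigger words and thresholds below are invented for illustration):

```python
TRIGGER_WORDS = {"police", "mafia", "follow", "watch"}  # hypothetical

class ToyParanoidAgent:
    def __init__(self):
        # The three affective variables named in the text.
        self.fear = 0.1
        self.anger = 0.1
        self.mistrust = 0.1

    def reply(self, user_input):
        # Threatening topics raise fear and anger, and mistrust creeps
        # up on every exchange, so replies grow more paranoid as the
        # conversation is extended over time.
        words = set(user_input.lower().split())
        if words & TRIGGER_WORDS:
            self.fear += 0.2
            self.anger += 0.1
        self.mistrust += 0.05
        if self.fear + self.mistrust > 1.0:
            return "Why do you keep asking me these things? Who sent you?"
        if self.fear > 0.5:
            return "I'd rather not talk about that."
        return "Go on."
```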

A version of the Turing test was used to evaluate PARRY's performance (Colby et al., 1972). Psychiatrists used teletypes to interview PARRY as well as human paranoid patients.

