Mind, Body, World: Foundations of Cognitive Science, 2013a

if they are weakly equivalent), accomplish this with the same algorithm, and bring this algorithm to life with the same architecture. Cognitive scientists are in the business of making observations that establish the strong equivalence of their models to human thinkers.

Classical cognitive science collects these observations by measuring particular behaviours that are unintended consequences of information processing, and which can therefore reveal the nature of the algorithm that is being employed. Newell and Simon (1972) named these behaviours second-order effects; in Chapter 2 these behaviours were called artifacts, to distinguish them from the primary or intended responses of an information processor. In Chapter 2, I discussed three general classes of evidence related to artifactual behaviour: intermediate state evidence, relative complexity evidence, and error evidence.

Note that although similar in spirit, the use of these three different types of evidence to determine the relationship between the algorithms used by model and subject is not the same as something like the Total Turing Test. Classical cognitive science does not require physical correspondence between model and subject. However, algorithmic correspondences established by examining behavioural artifacts put much stronger constraints on theory validation than simply looking for stimulus-response correspondences. To illustrate this, let us consider some examples of how intermediate state evidence, relative complexity evidence, and error evidence can be used to validate models.

One important source of information that can be used to validate a model is intermediate state evidence (Pylyshyn, 1984). Intermediate state evidence involves determining the intermediate steps that a symbol manipulator takes to solve a problem, and then collecting evidence to determine whether a modelled subject goes through the same intermediate steps. Intermediate state evidence is notoriously difficult to collect, because human information processors are black boxes: we cannot directly observe internal cognitive processing. However, clever experimental paradigms can be developed to permit intermediate states to be inferred.
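Viewed abstractly, this comparison amounts to checking whether the states inferred from a subject occur, in order, within the model's full trace of intermediate states (subject data will usually be sparser, since not every internal state can be observed). The following Python sketch is purely illustrative; the state labels and the comparison function are invented for this example and are not part of any published method.

```python
# Illustrative sketch only: checking whether states inferred from a
# subject appear, in order, within a model's trace of intermediate
# states. All state labels here are hypothetical.

def is_ordered_subsequence(observed, trace):
    """Return True if every observed state appears in the trace,
    in the same order (gaps in the observations are allowed)."""
    it = iter(trace)
    # 's in it' advances the iterator past the match, so order is enforced.
    return all(s in it for s in observed)

# A hypothetical model trace and a sparser set of inferred subject states:
model_trace = ["encode-problem", "set-subgoal-A", "set-subgoal-B", "report-answer"]
subject_states = ["encode-problem", "set-subgoal-B", "report-answer"]

print(is_ordered_subsequence(subject_states, model_trace))   # True
print(is_ordered_subsequence(["set-subgoal-B", "encode-problem"], model_trace))  # False: wrong order
```

The design choice of a subsequence test, rather than exact equality, reflects the point in the text: the subject's internal processing is only partially observable, so the evidence can at best show that the observed states are consistent with the model's ordering.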

A famous example of evaluating a model using intermediate state evidence is found in some classic and pioneering research on human problem solving (Newell & Simon, 1972). Newell and Simon collected data from human subjects as they solved problems; their method of data collection is known as protocol analysis (Ericsson & Simon, 1984). In protocol analysis, subjects are trained to think out loud as they work. A recording of what is said by the subject becomes the primary data of interest.

The logic of collecting verbal protocols is that the thought processes involved in active problem solving are likely to be stored in a person's short-term memory (STM), or working memory. Cognitive psychologists have established that items stored in such a memory are stored as an articulatory code that permits verbalization to
