
Mind, Body, World: Foundations of Cognitive Science, 2013a


Bever, Fodor, and Garrett (1968) used this result to conclude that associationism (and radical behaviourism) was not powerful enough to deal with the embedded clauses of natural human language. As a result, they argued that associationism should be abandoned as a theory of mind. The impact of this proof is measured by the lengthy responses to this argument by associationist memory researchers (Anderson & Bower, 1973; Paivio, 1986). We return to the implications of this argument when we discuss connectionist cognitive science in Chapter 4.

While finite state automata cannot accept the recursive grammar used by Bever, Fodor, and Garrett (1968), Turing machines can (Révész, 1983). Their ability to move in both directions along the tape provides them with a memory that enables them to match the number of leading bs in a string with the number of trailing bs.
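The counting argument can be made concrete with a small sketch. A finite state automaton has only a fixed number of states, so it cannot track an unbounded count of leading symbols; a Turing machine can, by sweeping its head back and forth along the tape. The illustrative language bⁿabⁿ used below is a standard non-regular example chosen here for simplicity, not necessarily the exact grammar discussed by Bever, Fodor, and Garrett:

```python
def accepts(string):
    """Turing-machine-style recognizer for the illustrative language
    b^n a b^n (n >= 0). Each pass, the head crosses off one leading b
    and one trailing b; the string is accepted only if the single
    surviving symbol is the central a. The back-and-forth head motion
    is exactly the bidirectional tape access a finite state automaton
    lacks."""
    tape = list(string)
    i, j = 0, len(tape) - 1
    # Cross off matching b's from both ends of the tape.
    while i < j and tape[i] == "b" and tape[j] == "b":
        tape[i] = tape[j] = "X"  # mark the cells as erased
        i += 1
        j -= 1
    # Accept only if the sole remaining symbol is the central a.
    return i == j and tape[i] == "a"
```

Because the loop can run for any n, the recognizer handles strings of arbitrary length with a fixed program, which is precisely what no fixed set of automaton states can do.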

Modern linguistics has concluded that the structure of human language must be described by grammars that are recursive. Finite state automata are not powerful enough devices to accommodate grammars of this nature, but Turing machines are. This suggests that an information processing architecture that is sufficiently rich to explain human cognition must have the same power—must be able to answer the same set of questions—as do Turing machines. This is the essence of the physical symbol system hypothesis (Newell, 1980), which is discussed in more detail below.

The Turing machine, as we saw in Chapter 2 and further discuss below, is a universal machine, and classical cognitive science hypothesizes that “this notion of symbol system will prove adequate to all of the symbolic activity this physical universe of ours can exhibit, and in particular all the symbolic activities of the human mind” (Newell, 1980, p. 155).

3.5 Underdetermination and Innateness<br />

The ability of a device to accept or generate a grammar is central to another computational level analysis of language (Gold, 1967). Gold performed a formal analysis of language learning which revealed a situation that is known as Gold’s paradox (Pinker, 1979). One solution to this paradox is to adopt a position that is characteristic of classical cognitive science, and which we have seen is consistent with its Cartesian roots. This position is that a good deal of the architecture of cognition is innate.

Gold (1967) was interested in the problem of how a system could learn the grammar of a language on the basis of a finite set of example expressions. He considered two different situations in which the learning system could be presented with expressions. In informant learning, the learner is presented with either valid or invalid expressions, and is also told about their validity, i.e., told whether they belong to the grammar or not. In text learning, the only expressions that are presented to the learner are grammatical.
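The two presentation regimes can be sketched as data streams. The toy target language below (strings over {a, b} that begin with a) is a hypothetical stand-in chosen only to make the contrast runnable; the point is the shape of each stream, not the particular grammar:

```python
import itertools

def in_language(s):
    """Toy target grammar: strings over {a, b} that start with 'a'.
    (A hypothetical example language, not one from Gold's paper.)"""
    return s.startswith("a")

def all_strings(alphabet="ab"):
    """Lazily enumerate every nonempty string over the alphabet,
    shortest first."""
    for n in itertools.count(1):
        for chars in itertools.product(alphabet, repeat=n):
            yield "".join(chars)

def informant_presentation():
    """Informant learning: every string eventually appears, each
    labelled with whether it belongs to the grammar."""
    for s in all_strings():
        yield s, in_language(s)

def text_presentation():
    """Text learning: only grammatical strings appear, unlabelled;
    the learner never sees a negative example."""
    for s in all_strings():
        if in_language(s):
            yield s
```

The informant stream begins ("a", True), ("b", False), ("aa", True), …, while the text stream begins "a", "aa", "ab", …; the absence of labelled negative evidence in the second regime is what drives Gold's negative results for text learning.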

