Mind, Body, World- Foundations of Cognitive Science, 2013a

Whether a learner is undergoing informant learning or text learning, Gold (1967) assumed that learning would proceed as a succession of presentations of expressions. After each expression was presented, the language learner would generate a hypothesized grammar. Gold proposed that each hypothesis could be described as a Turing machine that would either accept the (hypothesized) grammar or generate it. In this formalization, the notion of "learning a language" has become "selecting a Turing machine that represents a grammar" (Osherson, Stob, & Weinstein, 1986).

According to Gold's (1967) algorithm, a language learner would have a current hypothesized grammar. When a new expression was presented to the learner, a test would be conducted to see whether the current grammar could deal with the new expression. If the current grammar succeeded, then it remained. If the current grammar failed, then a new grammar—a new Turing machine—would have to be selected.

Under this formalism, when can we say that a grammar has been learned? Gold defined language learning as the identification of the grammar in the limit. When a language is identified in the limit, the current grammar being hypothesized by the learner no longer changes even as new expressions are encountered. Furthermore, it is expected that this state will occur after a finite number of expressions have been encountered during learning.
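The learning loop just described can be sketched in a few lines of code. This is a hypothetical simplification, not Gold's formalism: candidate grammars are modeled as membership predicates over strings rather than Turing machines, and the learner simply switches to the first candidate consistent with a failed expression.

```python
def gold_learner(candidate_grammars, text):
    """Yield the index of the current hypothesized grammar after each
    expression in `text`. The hypothesis changes only when the current
    grammar fails to accept a presented expression."""
    current = 0
    for expression in text:
        if not candidate_grammars[current](expression):
            # Current grammar failed: select the first candidate
            # that accepts the new expression.
            current = next(
                i for i, g in enumerate(candidate_grammars) if g(expression)
            )
        yield current

# Toy candidates: strings of only 'a's, versus strings over {'a', 'b'}.
only_a = lambda s: set(s) <= {"a"}
a_or_b = lambda s: set(s) <= {"a", "b"}

hypotheses = list(gold_learner([only_a, a_or_b], ["a", "aa", "ab", "ba"]))
print(hypotheses)  # → [0, 0, 1, 1]
```

The hypothesis stabilizes at index 1 once the expression "ab" forces a change; identification in the limit means such stabilization eventually occurs and is never revised afterward.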

In the previous section, we considered a computational analysis in which different kinds of computing devices were presented with the same grammar. Gold (1967) adopted an alternative approach: he kept the information processing constant—that is, he always studied the algorithm sketched above—but he varied the complexity of the grammar that was being learned, and he varied the conditions under which the grammar was presented, i.e., informant learning versus text learning.

In computer science, a formal description of any class of languages (human or otherwise) relates its complexity to the complexity of a computing device that could generate or accept it (Hopcroft & Ullman, 1979; Révész, 1983). This has resulted in a classification of grammars known as the Chomsky hierarchy (Chomsky, 1959a). In the Chomsky hierarchy, the simplest grammars are regular, and they can be accommodated by finite state automata. The next most complicated are the context-free grammars, which can be processed by pushdown automata (a device that is a finite state automaton augmented with an unbounded stack memory). Next are the context-sensitive grammars, which are the domain of linear bounded automata (i.e., a device like a Turing machine, but with a tape whose length is bounded by the length of the input). The most complex grammars are the unrestricted grammars, which can only be dealt with by Turing machines.
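The gap between the bottom two levels of the hierarchy can be made concrete with a small sketch. The examples below are illustrative choices, not from the source: a finite state automaton suffices for the regular language (ab)*, while recognizing the context-free language aⁿbⁿ requires a stack, represented here by a simple counter.

```python
def accepts_regular_ab_star(s):
    """Finite state automaton for (ab)*: state 0 expects 'a', state 1 expects 'b'."""
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 0
        else:
            return False
    return state == 0  # accept only in the start state

def accepts_anbn(s):
    """Pushdown-style recognizer for a^n b^n: push for each 'a', pop for each 'b'."""
    stack = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False  # an 'a' after a 'b' is illegal
            stack += 1
        elif ch == "b":
            seen_b = True
            stack -= 1
            if stack < 0:
                return False  # more 'b's than 'a's
        else:
            return False
    return stack == 0  # counts must balance
```

No fixed set of states can replace the counter in the second recognizer, because n is unbounded; this is exactly why aⁿbⁿ sits one level up the hierarchy from (ab)*.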

Gold (1967) used formal methods to determine the conditions under which each class of grammars could be identified in the limit. He was able to show that text learning could only be used to acquire the simplest grammar. In contrast, Gold

Elements of Classical Cognitive Science
