Mind, Body, World- Foundations of Cognitive Science, 2013a

function, one can still translate Rescorla-Wagner learning into perceptron learning, and vice versa (Dawson, 2008).

One would imagine that the existence of proofs of the computational equivalence between Rescorla-Wagner learning and perceptron learning would mean that perceptrons would not be able to provide any new insights into classical conditioning. However, this is not correct. Dawson (2008) has shown that if one puts aside the formal comparison of the two types of learning and uses perceptrons to simulate a wide variety of different classical conditioning paradigms, then some puzzling results occur. On the one hand, perceptrons generate the same results as the Rescorla-Wagner model for many different paradigms. Given the formal equivalence between the two types of learning, this is not surprising. On the other hand, for some paradigms, perceptrons generate different results than those predicted from the Rescorla-Wagner model (Dawson, 2008, Chapter 7). Furthermore, in many cases these differences represent improvements over Rescorla-Wagner learning. If the two types of learning are formally equivalent, then how is it possible for such differences to occur?

Dawson (2008) used this perceptron paradox to motivate a more detailed comparison between Rescorla-Wagner learning and perceptron learning. He found that while these two models of learning were equivalent at the computational level of investigation, there were crucial differences between them at the algorithmic level. In order to train a perceptron, the network must first behave (i.e., respond to an input pattern) in order for error to be computed to determine weight changes. In contrast, Dawson showed that the Rescorla-Wagner model defines learning in such a way that behaviour is not required!
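This algorithmic contrast can be sketched in code. The following is an illustrative sketch, not Dawson's (2008) own simulations; the function names and parameter values are hypothetical, and the perceptron is given a simple step activation for concreteness. The point it illustrates is the one in the text: the Rescorla-Wagner error term is defined over associative strengths alone, whereas the perceptron must first generate a response before its error, and hence its weight change, can be computed.

```python
def rescorla_wagner_update(V, present, lam, alpha=0.1, beta=1.0):
    """One Rescorla-Wagner trial (illustrative parameters).

    The error term (lam - total associative strength) is defined purely
    over the associative strengths of the CSs present on the trial; no
    behavioural response is ever generated.
    """
    total = sum(V[cs] for cs in present)  # combined associative strength
    return {cs: V[cs] + (alpha * beta * (lam - total) if cs in present else 0.0)
            for cs in V}

def perceptron_update(w, x, target, eta=0.1):
    """One delta-rule trial for a perceptron with a step activation.

    The network must first behave -- compute an output from its net
    input -- before error can be computed to determine weight changes.
    """
    net = sum(wi * xi for wi, xi in zip(w, x))
    output = 1.0 if net > 0 else 0.0  # the behavioural response
    error = target - output
    return [wi + eta * error * xi for wi, xi in zip(w, x)]
```

Note that deleting the two lines that compute `output` breaks `perceptron_update` entirely, while `rescorla_wagner_update` never contains such lines in the first place: that is the algorithmic difference in miniature.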

Dawson’s (2008) algorithmic analysis of Rescorla-Wagner learning is consistent with Rescorla and Wagner’s (1972) own understanding of their model: “Independent assumptions will necessarily have to be made about the mapping of associative strengths into responding in any particular situation” (p. 75). Later, they make this same point much more explicitly:

We need to provide some mapping of [associative] values into behavior. We are not prepared to make detailed assumptions in this instance. In fact, we would assume that any such mapping would necessarily be peculiar to each experimental situation, and depend upon a large number of ‘performance’ variables. (Rescorla & Wagner, 1972, p. 77)

Some knowledge is tacit: we can know more than we can tell (Polanyi, 1966). Dawson (2008) noted that the Rescorla-Wagner model presents an interesting variant of this theme, where if there is no explicit need for a behavioural theory, then there is no need to specify it explicitly. Instead, researchers can ignore Rescorla and Wagner’s (1972) call for explicit models to convert associative strengths into behaviour and instead assume unstated, tacit theories such as “strong associations produce

192 Chapter 4
