Mind, Body, World- Foundations of Cognitive Science, 2013a

Decision Point (from Table 4-4) | Equivalent Production | Network Cluster

Rule 4 Poisonous | P7: if (odor = none) ∧ (spore print colour = white) ∧ (gill size = narrow) ∧ ((stalk surface above ring = silky) ∨ (stalk surface above ring = scaly)) → not edible | 5

Rule 5 Edible | P8: if (odor = none) ∧ (spore print colour = white) ∧ (gill size = narrow) ∧ (stalk surface above ring = smooth) ∧ (bruises = no) → edible | 8 or 12

Rule 5 Poisonous | P9: if (odor = none) ∧ (spore print colour = white) ∧ (gill size = narrow) ∧ (stalk surface above ring = smooth) ∧ (bruises = yes) → not edible | 10

Table 4-5. Dawson et al.’s (2000) production system translation of Table 4-4. Conditions are given as sets of features. The Network Cluster column pertains to their artificial neural network trained on the mushroom problem and is described later in the text.
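The productions in Table 4-5 are simple condition-action rules. A minimal sketch of how they could be encoded (this is illustrative, not Dawson et al.’s implementation; the dictionary representation of a mushroom’s features is an assumption):

```python
# Productions P7-P9 from Table 4-5, encoded as condition-action rules
# over a dictionary of mushroom features. Feature names and values
# follow the table; the dict format itself is assumed for illustration.

def p7(m):
    # Rule 4 Poisonous: silky or scaly stalk surface above the ring.
    return (m["odor"] == "none"
            and m["spore print colour"] == "white"
            and m["gill size"] == "narrow"
            and m["stalk surface above ring"] in ("silky", "scaly"))

def p8(m):
    # Rule 5 Edible: smooth stalk surface above the ring, no bruises.
    return (m["odor"] == "none"
            and m["spore print colour"] == "white"
            and m["gill size"] == "narrow"
            and m["stalk surface above ring"] == "smooth"
            and m["bruises"] == "no")

def p9(m):
    # Rule 5 Poisonous: smooth stalk surface above the ring, bruises.
    return (m["odor"] == "none"
            and m["spore print colour"] == "white"
            and m["gill size"] == "narrow"
            and m["stalk surface above ring"] == "smooth"
            and m["bruises"] == "yes")

def classify(m):
    # Fire the first matching production; return the label and the
    # production that fired (the "reason" for the classification).
    if p7(m):
        return ("not edible", "P7")
    if p8(m):
        return ("edible", "P8")
    if p9(m):
        return ("not edible", "P9")
    return None  # handled by an earlier production (P1-P6, not shown)

mushroom = {"odor": "none", "spore print colour": "white",
            "gill size": "narrow", "stalk surface above ring": "smooth",
            "bruises": "no"}
print(classify(mushroom))  # → ('edible', 'P8')
```

Returning the identity of the production that fired, alongside the label, is exactly the extra information that the text below describes injecting into the network.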

The other nine output units were used to provide extra output learning, the technique employed to insert a classical theory into the network. Normally, a pattern classification system is provided only with information about which pattern labels to assign. For instance, in the mushroom problem, the system would typically be taught only to generate the label edible or the label poisonous. However, more information about the pattern classification task is frequently available. In particular, it is often known why an input pattern belongs to one class or another. It is possible to incorporate this information into the pattern classification problem by teaching the system not only to assign a pattern to a class (e.g., “edible”, “poisonous”) but also to generate a reason for making this classification (e.g., “passed Rule 1”, “failed Rule 4”). Elaborating a classification task along such lines is called the injection of hints or extra output learning (Abu-Mostafa, 1990; Suddarth & Kergosien, 1990).
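Concretely, extra output learning just augments each training target: alongside the class unit, one-hot “hint” units name the reason for the classification. A sketch of how such targets could be built (the exact branch names and their ordering are assumptions for illustration; only a few appear explicitly in the text):

```python
import numpy as np

# Extra output learning: each training target pairs the class label with
# a one-hot "reason" over the nine terminal branches of the decision
# tree. The branch list below is an illustrative assumption.
BRANCHES = ["Rule 1 edible", "Rule 1 poisonous",
            "Rule 2 edible", "Rule 2 poisonous",
            "Rule 3 edible", "Rule 3 poisonous",
            "Rule 4 poisonous",
            "Rule 5 edible", "Rule 5 poisonous"]

def make_target(label, branch):
    """Return a 10-unit target vector: one class unit (1.0 = edible)
    followed by nine hint units, one per terminal branch."""
    hints = np.zeros(len(BRANCHES))
    hints[BRANCHES.index(branch)] = 1.0
    class_unit = 1.0 if label == "edible" else 0.0
    return np.concatenate(([class_unit], hints))

t = make_target("edible", "Rule 5 edible")
# t has 10 components: the class unit plus exactly one active hint unit.
```

The network is then trained on these augmented targets with an ordinary supervised procedure; no change to the learning rule itself is required, which is what makes the injection of hints attractive.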

Dawson et al. (2000) hypothesized that extra output learning could be used to insert the decision tree from Table 4-4 into a network. Table 4-4 provides nine different terminal branches of the decision tree at which mushrooms are assigned to categories (“Rule 1 edible”, “Rule 1 poisonous”, “Rule 2 edible”, etc.). The network learned to “explain” why it classified an input pattern in a particular way by turning on one of the nine extra output units to indicate which terminal branch of the decision tree was involved. In other words, the network (which required 8,699 epochs of training on the 8,124 different input patterns!) classified mushrooms “for the same

Elements of Connectionist Cognitive Science 181
