
Mind, Body, World: Foundations of Cognitive Science, 2013a


The Turing equivalence of connectionist networks has long been established.
McCulloch and Pitts (1943) proved that a network of McCulloch-Pitts neurons
could be used to build the machine head of a universal Turing machine; universal
power was then achieved by providing this system with an external memory. “To
psychology, however defined, specification of the net would contribute all that could
be achieved in that field” (p. 131). More modern results have used the analog nature
of modern processors to internalize the memory, indicating that an artificial neural
network can simulate the entire Turing machine (Siegelmann, 1999; Siegelmann &
Sontag, 1991, 1995).

Modern associationist psychologists have been concerned about the implications
of the terminal meta-postulate and have argued against it in an attempt
to free their theories from its computational shackles (Anderson & Bower, 1973;
Paivio, 1986). The hidden units of modern artificial neural networks break these
shackles by capturing higher-order associations (associations between associations)
that are not defined in a vocabulary restricted to input and output activities.
The presence of hidden units provides enough power to modern networks to firmly
plant them in the class “universal machine” and to make them viable alternatives to
classical simulations.
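The power that hidden units add can be illustrated with exclusive-or (XOR), the standard example of a function beyond any single binary-threshold unit. Below is a minimal sketch (the weights and thresholds are my own illustrative choices, not taken from the text) of a two-layer threshold network whose output unit is defined over an association between associations: it fires when the OR of the inputs holds but the AND does not.

```python
# A two-layer network of binary-threshold units computing XOR.
# No single McCulloch-Pitts unit can compute XOR, but one hidden
# layer of such units suffices.

def step(net, threshold):
    """Heaviside step activation: fire (1) iff net input reaches threshold."""
    return 1 if net >= threshold else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2, 1)    # hidden unit: fires if either input is on
    h_and = step(x1 + x2, 2)   # hidden unit: fires only if both are on
    # output unit: fires for OR-but-not-AND (weights +1 and -2, threshold 1)
    return step(h_or - 2 * h_and, 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each hidden unit computes a first-order association over the inputs (OR and AND); the output unit is defined only over those hidden activities, which is what lets it capture a relation that cannot be stated in a vocabulary restricted to the input and output units themselves.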

4.7 What Do Output Unit Activities Represent?

When McCulloch and Pitts (1943) formalized the information processing of neurons,
they did so by exploiting the all-or-none law. As a result, whether a neuron
responded could be interpreted as assigning a “true” or “false” value to some proposition
computed over the neuron’s inputs. McCulloch and Pitts were able to design
artificial neurons capable of computing 14 of the 16 possible primitive functions on
the two-valued logic that was described in Chapter 2.
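The all-or-none behaviour of such a unit can be sketched as follows (the weight and threshold settings are illustrative choices of mine, not McCulloch and Pitts’ original notation): a unit fires, returning “true,” exactly when its weighted net input reaches its threshold, and different settings realize different primitive logical functions.

```python
# A McCulloch-Pitts unit: sum the weighted inputs and respond
# all-or-none, depending on whether the sum reaches the threshold.

def mp_unit(inputs, weights, threshold):
    """Return 1 ("true") iff net input reaches the threshold, else 0."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

# Two primitive functions of two-valued logic, realized by one unit each:
AND = lambda x1, x2: mp_unit((x1, x2), (1, 1), 2)  # fires only for (1, 1)
OR = lambda x1, x2: mp_unit((x1, x2), (1, 1), 1)   # fires unless (0, 0)
```

The two primitive functions that elude a single unit of this kind are exclusive-or and its negation, since neither can be realized by any one weighted-sum-and-threshold response.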

McCulloch and Pitts (1943) formalized the all-or-none law by using the Heaviside
step equation as the activation function for their artificial neurons. Modern activation
functions such as the logistic equation provide a continuous approximation of
the step function. It is also quite common to interpret the logistic function in digital,
step function terms. This is done by interpreting a modern unit as being “on” or “off”
if its activity is sufficiently extreme. For instance, in simulations conducted with my
laboratory software (Dawson, 2005) it is typical to view a unit as being “on” if its activity
is 0.9 or higher, or “off” if its activity is 0.1 or lower.
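The relation between the two activation functions, and the digital reading of logistic activity, can be sketched as follows (a minimal illustration; the 0.9 and 0.1 cut-offs are the ones just described):

```python
import math

def heaviside(net):
    """All-or-none step activation: 1 for net input at or above zero."""
    return 1.0 if net >= 0.0 else 0.0

def logistic(net):
    """Continuous approximation of the step function."""
    return 1.0 / (1.0 + math.exp(-net))

def digital_reading(activity):
    """Interpret a continuous activity in digital, step-function terms."""
    if activity >= 0.9:
        return "on"
    if activity <= 0.1:
        return "off"
    return "indeterminate"
```

For a strongly positive net input the logistic unit reads as “on” (e.g., logistic(3.0) is roughly 0.95), for a strongly negative one as “off” (logistic(-3.0) is roughly 0.05), while a net input near zero yields an activity near 0.5 that is digitally indeterminate.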

Digital activation functions, or digital interpretations of continuous activation
functions, mean that pattern recognition is a primary task for artificial neural networks
(Pao, 1989; Ripley, 1996). When a network performs pattern recognition, it
is trained to generate a digital or binary response to an input pattern, where this

152 Chapter 4
