Chapter 2. Prehension

Appendix C - Computational Neural Modelling 383

4. Assemblage Models. To capture the essence of behavioral data, a conceptual coordinated control program model (Arbib, 1981) can be written to describe the activation of schemas (units of motor control and interactions with the environment). A schema can be modelled as many networks (a network assemblage) passing control information (activation lines) and data (e.g., target location). This conceptual model can be implemented in a language such as RS (Lyons & Arbib, 1989).
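The flavor of such an assemblage can be suggested in a short sketch (this is not Arbib's RS language; the schema names, the data passed, and the coordinates are all hypothetical):

```python
# Illustrative sketch only: schemas as units that exchange control
# information (activation lines) and data (e.g., target location).

class Schema:
    """A unit of motor control: becomes active when its activation line is raised."""
    def __init__(self, name):
        self.name = name
        self.active = False
        self.data = {}

    def activate(self, **data):
        self.active = True
        self.data.update(data)

# A small network assemblage: a hypothetical "reach" schema passes
# target-location data on to a hypothetical "preshape" schema.
reach = Schema("reach")
preshape = Schema("preshape")

reach.activate(target_location=(0.3, 0.1, 0.5))  # hypothetical coordinates
preshape.activate(**reach.data)                  # data line from reach

print(preshape.active, preshape.data["target_location"])
```

The point of the sketch is only the separation the text describes: control information (the activation calls) travels independently of the data (the target location) that flows between schemas.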

Computational network models that involve parallel distributed processing (Rumelhart et al., 1986b) have been called connectionist models (Feldman & Ballard, 1982) or artificial neural networks. The key ingredient of this type of information processing is that it involves the interaction of a large number of simple processing elements that can either inhibit or excite each other. Thus they honor very general neurobiological constraints but, using simplifying assumptions, are motivated by cognitive phenomena and are governed primarily by computational constraints (Churchland & Sejnowski, 1988). Each element can be a simple model of a neuron, as in the third example above, or it can itself be a network of neurons, as in the network assemblage example. Control theoretic models, while not usually involving the interaction of a large number of elements, are useful for bringing to bear the tools of modern control theory on information processing, as was seen in Chapter 5. Models of individual neurons are not discussed in this book, since we are more interested in how networks of neurons can perform computations.
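A minimal sketch of such interacting elements, assuming an arbitrary hand-chosen weight matrix and a logistic activation function (neither comes from the text), might look like this:

```python
import numpy as np

# Sketch of a connectionist update: each element sums weighted input from
# the others; positive weights excite, negative weights inhibit.
W = np.array([[ 0.0,  0.8, -0.5],   # hypothetical weights: units 1 and 2
              [ 0.8,  0.0, -0.5],   # excite each other, while unit 3
              [-0.5, -0.5,  0.0]])  # inhibits (and is inhibited by) both

def step(a, W):
    """One synchronous update: logistic squashing of the net input."""
    net = W @ a
    return 1.0 / (1.0 + np.exp(-net))

a = np.array([0.9, 0.1, 0.5])  # arbitrary initial activations
for _ in range(20):
    a = step(a, W)
print(np.round(a, 2))
```

After repeated updates the mutually excitatory pair settles at a higher activation than the unit both of them inhibit, which is the kind of collective behavior the simple elements produce.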

Parallel distributed processing is neurally inspired for two major reasons (Rumelhart et al., 1986a). The first is time: neurons are slow, and yet can solve problems involving a large number of simultaneous constraints in a relatively short time period. Feldman (1985) called this the '100 step program constraint', since one second of processing by neurons that work at the millisecond range would allow only about 100 time-steps. The second is that the knowledge being represented lies in the connections, and not in 'storage cells' as in conventional computers. If the model is adaptive, a distributed representation allows the network to learn key relationships between the inputs and outputs. Memory is constructed as patterns of activity, and knowledge is not stored but created or recreated each time an input is received. Importantly, as more knowledge is added, the weights are changed only very slightly; otherwise, existing knowledge would be wiped out. This leads to various constraints on processing capabilities, as described in the next section.
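The idea of storing knowledge in connections through many small weight changes can be sketched with a delta-rule learner (the rule, the learning rate, and the random associations are illustrative assumptions, not material from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Delta-rule sketch: store two input-output associations in one weight
# matrix by making many *small* weight changes (learning rate eta).
x1, y1 = rng.normal(size=8), rng.normal(size=4)  # hypothetical pattern 1
x2, y2 = rng.normal(size=8), rng.normal(size=4)  # hypothetical pattern 2

W = np.zeros((4, 8))
eta = 0.05  # small steps, so new learning barely disturbs old knowledge
for _ in range(500):
    for x, y in ((x1, y1), (x2, y2)):
        W += eta * np.outer(y - W @ x, x)  # error-correcting weight change

# Both associations are now recreated from the same set of connections.
print(np.allclose(W @ x1, y1, atol=1e-2), np.allclose(W @ x2, y2, atol=1e-2))
```

Nothing resembling either output pattern is stored anywhere; each is recreated as a pattern of activity when its input is presented, and both associations share the same weights. A much larger eta, by contrast, would let each update overwrite what the other pattern had established.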
