Chapter 2. Prehension


Appendix C - Computational Neural Modelling 385

Figure C.1 McCulloch-Pitts neuron vs. leaky integrator. Activation is on the x axis and output is on the y axis. For the leaky integrator, several values of t are shown. See text for details.

u(t+1) = u(t) + du = u(t) + (-u(t) + I) = I (2)

which is precisely the McCulloch-Pitts unit. The advantage of using the leaky integrator model is that it provides a continuous-time model of the neural network.
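The contrast can be sketched in code. The following is a minimal illustration, not from the text: a McCulloch-Pitts-style unit that thresholds a weighted sum of its inputs, and a leaky integrator stepped by Euler integration of du/dt = (-u + I)/tau. All weights, thresholds, step sizes, and time constants are illustrative assumptions.

```python
# Sketch (illustrative parameters, not from the text) contrasting a
# discrete McCulloch-Pitts-style unit with a continuous-time leaky
# integrator.

def mcculloch_pitts(weights, inputs, threshold=1.0):
    """Discrete unit: fire (output 1) when the weighted sum of the
    inputs reaches the threshold, otherwise stay silent (output 0)."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

def leaky_integrator(I, tau=1.0, dt=1.0, steps=1, u0=0.0):
    """Euler steps of du/dt = (-u + I) / tau under constant input I."""
    u = u0
    for _ in range(steps):
        u += dt * (-u + I) / tau  # Euler update of the membrane state
    return u

# Positive (excitatory) and negative (inhibitory) weights:
print(mcculloch_pitts([0.5, 0.5, -1.0], [1, 1, 0]))  # 1: excitation fires
print(mcculloch_pitts([0.5, 0.5, -1.0], [1, 1, 1]))  # 0: inhibition suppresses

# With tau = dt = 1, a single step already yields u = I, matching
# equation (2) above:
print(leaky_integrator(I=0.8, steps=1))  # 0.8
# A larger time constant instead gives a gradual approach toward I:
print(leaky_integrator(I=0.8, tau=10.0, steps=1))  # still well below 0.8
```

With tau = dt = 1 the Euler update collapses to u(t+1) = I, recovering the McCulloch-Pitts behaviour; a larger tau spreads the same approach to I over many small steps, which is what makes the leaky integrator a continuous-time model.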

Each neuron has a state, or activation level. An activation function is a deterministic function that computes a neuron's state as a function of the neuron's input. The strength of the connection from one neuron to another can be attenuated or enhanced through the use of a weight, which represents the synaptic strength. Weights on the inputs adjust the influence of each input. The activation function for a McCulloch-Pitts neuron is:

a_i = Σ_j w_ij a_j

where the activity of each input into neuron i is multiplied by an associated weight w_ij (the strength of the connection between neuron i and neuron j), and these products are summed to produce the activation of that neuron. Positive weights represent excitatory connections; negative weights, inhibitory ones. The state of activation is actually

time-varying (i.e., a_i(t)), so that the state of the system can be represented at any time t. An activation of zero usually means that the neuron is inactive. Activation functions can be discrete (mapping the inputs to a binary value or a small and limited set of values) or
