



units. Also, we assume n input lines, each of which connects to all the output units. The inputs and outputs both consist of real numbers, although much of what is said will apply to binary neurons as well. The topology is shown in Figure C.8. Note that there are n² connection weights, since each input connects to each output.

Questions regarding this type of network include:

1) given the patterns to be stored, how are the connection weights set to produce the desired behavior?

2) what is the network's capacity (i.e., how many input/output pairs can it 'store')?

3) are there any restrictions on what the patterns can be?

[Figure C.8. Heteroassociative memory with four inputs and four outputs.]

Since each unit is a linear summer, its output is the dot product of its input and the vector of connection weights impinging on it; i.e., for unit i, the incoming weights are w_ij, j = 1 to n, and the output is:

$$O_i = \sum_{j=1}^{n} w_{ij} I_j$$

where I is the input vector and O is the output vector. If w_i is the row vector of weights for neuron i, then

$$O_i = \mathbf{w}_i \cdot \mathbf{I}$$

If W is the matrix of weights for the whole memory (i.e., the ith row is w_i), then the behavior of the memory can be written

$$\mathbf{O} = W\mathbf{I}$$
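As a concrete illustration, here is a minimal numpy sketch of the recall step O = WI just described. The outer-product (Hebbian) construction of W shown below is one standard answer to question 1 above, assumed here for illustration rather than taken from the text, and the pattern values are made up.

```python
import numpy as np

# Illustrative stored pairs: columns of I_pats are input patterns, columns
# of O_pats are the outputs they should map to.  The inputs are chosen
# orthonormal so the outer-product rule below recalls each pair exactly
# (this choice bears on questions 2 and 3 about capacity and restrictions).
I_pats = np.array([[1.0,  1.0],
                   [1.0, -1.0],
                   [0.0,  0.0],
                   [0.0,  0.0]]) / np.sqrt(2.0)   # n = 4 inputs, 2 pairs
O_pats = np.array([[0.0,  1.0],
                   [1.0,  0.0],
                   [0.0,  1.0],
                   [1.0,  0.0]])                   # n = 4 outputs, 2 pairs

# One common answer to question 1: set W as the sum of outer products of
# each output pattern with its input pattern.  W is n x n, and w_ij is the
# weight from input line j to output unit i.
W = O_pats @ I_pats.T

# Recall: each output unit is a linear summer, O_i = sum_j w_ij * I_j,
# so the whole memory computes the matrix-vector product O = W I.
O_recalled = W @ I_pats[:, 0]
print(np.allclose(O_recalled, O_pats[:, 0]))   # True for orthonormal inputs
```

With non-orthogonal input patterns the recalled outputs pick up crosstalk from the other stored pairs, which is exactly the kind of limitation questions 2 and 3 are asking about.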
