
MACHINE LEARNING TECHNIQUES - LASA



6.4.1 The Adaline

The Adaline is a simple one-layer feed-forward neural network, composed of perceptron units.

Figure 6-4: A one-layer neural network
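A minimal sketch of what one such unit computes (all names here are illustrative, not from the text): each perceptron unit forms a weighted sum of its inputs, and the classical Adaline learns directly on this linear activation.

```python
def adaline_output(weights, x):
    """Linear activation of one unit: y = sum_j w_j * x_j."""
    return sum(w_j * x_j for w_j, x_j in zip(weights, x))

weights = [0.5, -0.2]
x = [1.0, 2.0]
y = adaline_output(weights, x)   # 0.5*1.0 + (-0.2)*2.0 = 0.1
```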

Let X^p, p = 1, …, P, be the P input patterns with which you want to train the network. Let y^p be the actual output of the network when presented with X^p, and z^p the desired output. One can compute an error measure over all patterns:

E = Σ_{p=1}^{P} E^p = ½ Σ_{p=1}^{P} (z^p − y^p)²        (6.12)
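The summed squared error of Eq. (6.12) can be sketched in a few lines (function and variable names are my own, not the text's):

```python
def total_error(z, y):
    """E = 1/2 * sum_p (z^p - y^p)^2 over all P patterns."""
    return 0.5 * sum((z_p - y_p) ** 2 for z_p, y_p in zip(z, y))

z = [1.0, 0.0, 1.0]    # desired outputs z^p
y = [0.8, 0.1, 0.6]    # actual network outputs y^p
E = total_error(z, y)  # 0.5 * (0.04 + 0.01 + 0.16) = 0.105
```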

In order to minimize the error, one can compute the gradient of the error with respect to the weights and move the weights in the opposite direction:

Δw_j = −γ ∂E/∂w_j        (6.13)

Figure 6-5: A schematic diagram showing the principle of error descent.<br />

If the gradient is positive, changing the weights in a positive direction would increase the error; therefore, we change the weights in a negative direction. Conversely, if the gradient is negative, one must change the weights in a positive direction to decrease the error.
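The update of Eq. (6.13) can be sketched for a single linear unit (the learning rate γ and all names below are illustrative assumptions). With y^p = Σ_j w_j x_j^p, the gradient of Eq. (6.12) is ∂E/∂w_j = −Σ_p (z^p − y^p) x_j^p, so each step moves the weights against the gradient:

```python
def train_step(w, X, z, gamma=0.1):
    """One gradient-descent step: w_j <- w_j - gamma * dE/dw_j."""
    # forward pass: y^p = sum_j w_j x_j^p for every pattern
    y = [sum(w_j * x_j for w_j, x_j in zip(w, x)) for x in X]
    # dE/dw_j = -sum_p (z^p - y^p) x_j^p
    grad = [-sum((z_p - y_p) * x[j] for x, z_p, y_p in zip(X, z, y))
            for j in range(len(w))]
    return [w_j - gamma * g_j for w_j, g_j in zip(w, grad)]

# toy problem whose exact solution is w = [1.0, -1.0]
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
z = [1.0, -1.0, 0.0]
w = [0.0, 0.0]
for _ in range(100):
    w = train_step(w, X, z)
# w converges toward [1.0, -1.0]
```

Repeating the step drives the error downhill, which is exactly the error-descent principle of Figure 6-5.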

© A.G.Billard 2004 – Last Update March 2011
