

Because of the linearity of the activation function of the Perceptron neuron of the Adaline network (i.e. y = ∑_j w_j x_j), we have:

∂y^p / ∂w_j = x_j^p        (6.14)

∂E^p / ∂y^p = −( z^p − y^p )        (6.15)

so that:

Δ_p w_j = γ ( z^p − y^p ) ⋅ x_j^p        (6.16)

This has proven to be a most powerful rule and is at the core of almost all current supervised learning methods for ANNs. It should be emphasized, however, that nothing written so far guarantees that the method will cause the weights to converge. It can be proved that the method will give an optimal (in a least-squares error sense) approximation of the function being modeled; however, the method does not ensure convergence to a global optimum.
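As a concrete illustration (not part of the original text), here is a minimal Python sketch of the delta rule (6.16) applied to a single linear Adaline unit; the toy data, learning rate γ and number of passes are hypothetical choices.

```python
import numpy as np

# Minimal sketch of the delta rule (Eq. 6.16) for a single Adaline unit with a
# linear activation y = sum_j w_j x_j. All data and hyperparameters below are
# illustrative assumptions, not values from the text.
rng = np.random.default_rng(0)

X = rng.normal(size=(100, 3))                    # 100 input patterns x^p
true_w = np.array([0.5, -1.0, 2.0])              # hypothetical target mapping
z = X @ true_w + 0.01 * rng.normal(size=100)     # desired outputs z^p

w = np.zeros(3)                                  # weights w_j
gamma = 0.01                                     # learning rate gamma

for epoch in range(50):
    for x_p, z_p in zip(X, z):
        y_p = w @ x_p                            # linear output y^p
        w += gamma * (z_p - y_p) * x_p           # Delta_p w_j = gamma (z^p - y^p) x_j^p

print(w)  # approaches true_w for a sufficiently small learning rate
```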

6.4.2 The Backpropagation Network

An example of a multi-layered Perceptron, or feed-forward neural network, is shown in Figure 6-6. Activity in the network is propagated forwards via a first set of weights from the input layer to the hidden layer, and then via a second set of weights from the hidden layer to the output layer. The error is calculated by Equation (6.12), as was done for the Adaline network. Now, however, two sets of weight changes must be computed.
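A minimal Python sketch of this forward pass, assuming sigmoidal hidden and output units (the particular activation function and layer sizes are assumptions for illustration; the text only specifies the layered structure and the error of Equation (6.12)):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W1, W2):
    """Propagate activity forwards through the two sets of weights."""
    h = sigmoid(W1 @ x)      # input layer -> hidden layer (first set of weights)
    y = sigmoid(W2 @ h)      # hidden layer -> output layer (second set of weights)
    return h, y

def error(y, z):
    """Sum-of-squares error over one pattern, in the spirit of Eq. (6.12)."""
    return 0.5 * np.sum((z - y) ** 2)
```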

Here, however, one does not have access to the desired output of the hidden units. This is referred to as the credit assignment problem: we must determine how much effect each weight in the first layer of weights has on the final output of the network. In order to compute the weight changes, we need to propagate the error backwards, i.e. to backpropagate it, across the two layers. The algorithm is quite general and applies to any number of hidden layers.

Figure 6-6: Multi-layered feed-forward NN
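To make the credit assignment step concrete, the following sketch performs one backpropagation update for a network with a single hidden layer, assuming sigmoidal units (so that the derivative of the activation is y(1 − y)); the learning rate and all dimensions are hypothetical.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(x, z, W1, W2, gamma=0.1):
    """One gradient-descent update of both weight layers for one pattern (x, z)."""
    # forward pass
    h = sigmoid(W1 @ x)                       # hidden activities
    y = sigmoid(W2 @ h)                       # network output

    # output-layer error term: (z - y) times the derivative of the output unit
    delta_out = (z - y) * y * (1.0 - y)

    # credit assignment: propagate the error back through the second weight layer
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

    # weight changes for both layers (outer products of error terms and activities)
    W2 += gamma * np.outer(delta_out, h)
    W1 += gamma * np.outer(delta_hid, x)
    return W1, W2
```

The same recursion extends to any number of hidden layers: each layer's error term is obtained from the next layer's error term through the transposed weight matrix, as stated above.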

