MACHINE LEARNING TECHNIQUES - LASA
6.8.1.1 Algorithm for Kohonen's Self-Organizing Map
Assume the output nodes are connected in an array (usually 1- or 2-dimensional), and that the network is fully connected: all nodes in the input layer are connected to all nodes in the output layer. The competitive learning algorithm proceeds as follows:
1. Initialize the weights – the initial weights may be random values or values assigned to cover the space of inputs.
2. Present an input \vec{x} to the net.
3. For each output node j, calculate the distance between the input \vec{x} and its weight vector \vec{w}^j, e.g. using the Euclidean distance:

   d(\vec{x}, \vec{w}^j) = \|\vec{x} - \vec{w}^j\| = \sqrt{\sum_{i=1}^{n} (x_i - w_i^j)^2}        (6.57)
4. Determine the "winning" output node i, for which d is minimal:

   \|\vec{w}^i - \vec{x}\| \le \|\vec{w}^k - \vec{x}\| \quad \forall k        (6.58)

   Note: the above equation is equivalent to \vec{w}^i \cdot \vec{x} \ge \vec{w}^k \cdot \vec{x} only if the weights are normalized.
5. Update the weights of the winner node and of the nodes in a neighborhood using the rule:

   \Delta \vec{w}^k(t) = \eta(t) \cdot h_{k,i}(t) \cdot (\vec{x} - \vec{w}^k(t))        (6.59)

   where \eta(t) \in [0,1] is a learning rate or gain and h_{k,i}(t) is the neighborhood function. It is equal to 1 when i = k and falls off with the distance r_{ki} between output nodes i and k. Thus, the closer the units are to the winner, the larger the update. Note that both the learning rate and the neighborhood vary with time. It is here that the topological information is supplied: nearby units receive similar updates and thus end up responding to nearby input patterns. The above rule drags the weight vector \vec{w}^i and the weights of nearby units towards the input \vec{x}.
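The five steps above can be sketched in Python with NumPy for a 1-dimensional map. The Gaussian form of the neighborhood function h_{k,i}(t) and the linear decay schedules for the learning rate and neighborhood width are common choices, not prescribed by the text; all function and parameter names here are illustrative.

```python
import numpy as np

def train_som(data, n_nodes=10, n_epochs=20, eta0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D Kohonen map on `data` (shape: n_samples x n_dims)."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize the weights to random values covering the input space
    W = rng.uniform(data.min(), data.max(), size=(n_nodes, data.shape[1]))
    positions = np.arange(n_nodes)      # node coordinates in the output array
    t, t_max = 0, n_epochs * len(data)
    for _ in range(n_epochs):
        for x in data:                  # Step 2: present an input x
            # Step 3: Euclidean distance between x and every weight vector (Eq. 6.57)
            d = np.linalg.norm(W - x, axis=1)
            i = np.argmin(d)            # Step 4: winning node, minimal distance (Eq. 6.58)
            # Learning rate and neighborhood width both decay with time t
            eta = eta0 * (1 - t / t_max)
            sigma = sigma0 * (1 - t / t_max) + 1e-3
            # Gaussian neighborhood h_{k,i}(t): 1 at the winner, falls off with r_ki
            h = np.exp(-((positions - positions[i]) ** 2) / (2 * sigma ** 2))
            # Step 5: drag each weight vector towards x (Eq. 6.59)
            W += eta * h[:, None] * (x - W)
            t += 1
    return W
```

Because each update is a convex combination of the old weight and the input (eta * h never exceeds eta0 here), the learned weights stay inside the range of the training data.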
Note that if there is only a single output node, the Kohonen rule is equivalent to the Outstar rule, see Section 6.6.2:

   \Delta w_{ij} = \alpha \cdot x_i \cdot y_j - \gamma \cdot w_{ij}        (6.60)
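For comparison, the Outstar update of Eq. (6.60) can be written as a single vectorized line; the rates and activation values below are hypothetical and serve only to illustrate the rule.

```python
import numpy as np

alpha, gamma = 0.1, 0.01           # hypothetical learning and decay rates
x = np.array([1.0, 0.0])           # input activations x_i
y = np.array([0.5, 0.25])          # output activations y_j
W = np.ones((2, 2))                # weights w_ij

# Outstar rule (Eq. 6.60): delta w_ij = alpha * x_i * y_j - gamma * w_ij
W += alpha * np.outer(x, y) - gamma * W
```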
© A.G.Billard 2004 – Last Update March 2011