MACHINE LEARNING TECHNIQUES - LASA
If $y_i$ and $y_j$ are highly correlated, then the weights between them will grow to a large negative value and each will tend to turn the other off. Indeed, we have:

$$(\Delta w_{ij} \to 0) \;\Rightarrow\; (\langle y_i , y_j \rangle \to 0)$$
The weight change stops when the two outputs are decorrelated. At this stage, the algorithm converges. Note that there is no need for weight decay or renormalization of anti-Hebbian weights, as they are automatically self-limiting.
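To make this explicit, one can average the anti-Hebbian update (Equation (6.39) below) over the input distribution: the expected weight change is proportional to the output correlation, so the fixed points of the rule are exactly the decorrelated states:

$$\langle \Delta w_{ij} \rangle = -\alpha \, \langle y_i \, y_j \rangle \;\;\Rightarrow\;\; \langle \Delta w_{ij} \rangle = 0 \iff \langle y_i \, y_j \rangle = 0$$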
6.7.1 Foldiak's models
Foldiak has suggested several models combining anti-Hebbian learning and weight decay. Here, we will consider the first two models as examples of purely anti-Hebbian learning. The first model is shown in Figure 6-12 and has anti-Hebbian connections between the output neurons.
Figure 6-12: Foldiak's first model
The equations, which define its dynamical behavior, are

$$y_i = x_i + \sum_{j=1}^{n} w_{ij} \, y_j \qquad (6.38)$$

with learning rule

$$\Delta w_{ij} = -\alpha \cdot y_i \cdot y_j \quad \text{for } i \neq j \qquad (6.39)$$
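As an illustration, here is a minimal numerical sketch of this model in Python/NumPy. The learning rate, input statistics, and number of samples are illustrative assumptions, not values from the text. It settles the network activity using the closed matrix form of Equation (6.38) (Equation (6.40) below), applies the anti-Hebbian rule (6.39), and checks that the outputs decorrelate:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.01            # learning rate (illustrative choice)
n, T = 2, 5000          # two units, number of input samples

# Correlated zero-mean inputs: x2 is a noisy copy of x1.
x1 = rng.standard_normal(T)
X = np.stack([x1, 0.9 * x1 + 0.1 * rng.standard_normal(T)], axis=1)

W = np.zeros((n, n))    # anti-Hebbian lateral weights, zero diagonal
I = np.eye(n)

for x in X:
    y = np.linalg.solve(I - W, x)   # settled activity y = (I - W)^-1 x, Eqs. (6.38)/(6.40)
    dW = -alpha * np.outer(y, y)    # anti-Hebbian rule, Eq. (6.39)
    np.fill_diagonal(dW, 0.0)       # the rule applies only for i != j
    W += dW                         # y y^T is symmetric, so W stays symmetric

# After learning, the lateral weight is negative and the outputs decorrelate.
Y = np.linalg.solve(I - W, X.T).T
print("w12 =", W[0, 1])
print("output correlation:", np.corrcoef(Y.T)[0, 1])
```

With these settings the lateral weight should settle at a negative value for which the output correlation is close to zero, with no weight decay or renormalization required, as claimed above.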
In matrix terms, we have

$$y = x + W \cdot y$$

and so,

$$y = (I - W)^{-1} \cdot x \qquad (6.40)$$
Therefore, we can view the system as a transformation $T$ from the input vector $x$ to the output $y$, given by:

$$y = T \cdot x = (I - W)^{-1} \cdot x \qquad (6.41)$$
Now, the matrix $W$ must be symmetric, and only its non-diagonal terms are non-zero. For example, if we consider only a two-input, two-output net as in the diagram,

$$W = \begin{pmatrix} 0 & w \\ w & 0 \end{pmatrix}$$
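For this two-unit network, the transformation of Equation (6.41) can be written in closed form (a straightforward 2x2 inversion, valid for $|w| < 1$):

$$T = (I - W)^{-1} = \begin{pmatrix} 1 & -w \\ -w & 1 \end{pmatrix}^{-1} = \frac{1}{1 - w^2} \begin{pmatrix} 1 & w \\ w & 1 \end{pmatrix}$$

Since anti-Hebbian learning drives $w$ negative for correlated inputs, each output subtracts a fraction of the other, which is what decorrelates them.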