MACHINE LEARNING TECHNIQUES - LASA

so that T is given by:

T = (I - W)^{-1} = \begin{pmatrix} 1 & -w \\ -w & 1 \end{pmatrix}^{-1} = \frac{1}{1-w^{2}} \begin{pmatrix} 1 & w \\ w & 1 \end{pmatrix}    (6.42)

Now, let the two-dimensional input vector have correlation matrix

C_{xx} = \begin{pmatrix} \sigma_1^{2} & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^{2} \end{pmatrix}

where \rho is the correlation coefficient and \sigma_1^{2}, \sigma_2^{2} are the variances of the elements of x. Now the correlation matrix for y can be calculated. Since y = T \cdot x, we have:

C_{yy} = E\{ y y^{T} \} = E\{ (T \cdot x)(T \cdot x)^{T} \} = T \cdot C_{xx} \cdot T^{T}

Then,

C_{yy} = \frac{1}{(w^{2}-1)^{2}} \begin{pmatrix} \sigma_1^{2} + 2 w \rho \sigma_1 \sigma_2 + w^{2}\sigma_2^{2} & \rho\sigma_1\sigma_2 (w^{2}+1) + (\sigma_1^{2}+\sigma_2^{2}) w \\ \rho\sigma_1\sigma_2 (w^{2}+1) + (\sigma_1^{2}+\sigma_2^{2}) w & \sigma_2^{2} + 2 w \rho \sigma_1 \sigma_2 + w^{2}\sigma_1^{2} \end{pmatrix}    (6.43)

The anti-Hebbian rule reaches equilibrium when the units are decorrelated, i.e. when \Delta w_{12} = \Delta w_{21} = 0. Setting the cross-correlation term of C_{yy} to zero gives a quadratic equation in w, which we can naturally solve.

Let us consider the special case where the elements of x have the same variance, so that \sigma_1 = \sigma_2 = \sigma. Then the cross-correlation term becomes \rho\sigma^{2}(w^{2}+1) + 2\sigma^{2} w, and so we must solve the quadratic equation:

\rho w^{2} + 2 w + \rho = 0    (6.44)

which has a zero at

w_f = \frac{-1 + \sqrt{1-\rho^{2}}}{\rho}    (6.45)

One can further show that this is a stable point in the weight space.
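
This fixed point can be checked numerically. The short Python/NumPy sketch below (the values rho = 0.5 and sigma_1 = sigma_2 = 1 are purely illustrative, not taken from the text) builds C_xx, computes w_f from (6.45), forms T from (6.42), and verifies that the off-diagonal terms of C_yy in (6.43) vanish:

import numpy as np

rho, sigma = 0.5, 1.0                        # illustrative correlation and standard deviation
C_xx = np.array([[sigma**2,     rho*sigma**2],
                 [rho*sigma**2, sigma**2    ]])

w_f = (-1.0 + np.sqrt(1.0 - rho**2)) / rho   # fixed point of the anti-Hebbian rule, equation (6.45)

W = np.array([[0.0, w_f],
              [w_f, 0.0]])                   # lateral anti-Hebbian weights
T = np.linalg.inv(np.eye(2) - W)             # feedforward mapping T = (I - W)^{-1}, equation (6.42)

C_yy = T @ C_xx @ T.T                        # output correlation matrix, equation (6.43)
print(C_yy)                                  # off-diagonal entries are ~0: the outputs are decorrelated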

Foldiak's second model allows all neurons to receive their own outputs with weight 1:

\Delta w_{ii} = \alpha ( 1 - y_i y_i )    (6.46)

which can be written in matrix form as

\Delta W = \alpha ( I - Y Y^{T} )    (6.47)

where I is the identity matrix and Y is the vector of outputs. This network will converge when the outputs are decorrelated (due to the off-diagonal anti-Hebbian learning) and when the expected variance of the outputs is equal to 1; i.e. this learning rule forces each network output to take responsibility for the same amount of information, since the entropy of each output is the same. This is generalizable to

\Delta w_{ij} = \alpha ( \theta_{ij} - y_i y_j )    (6.48)

where \theta_{ij} = 0 for i \neq j. The value of \theta_{ii}, for all i, determines the variance of that output, and so we can manage the information output of each neuron.

6.7.2 CCA Revisited

Adapted from Peiling Lai and Colin Fyfe, Kernel and Nonlinear Canonical Correlation Analysis, Computing and Information Systems, 7 (2000), pp. 43-49.

The Canonical Correlation Network

Figure 1: The CCA Network. By adjusting the weights w_1 and w_2, we maximize the correlation between y_1 and y_2.

Let us consider CCA in artificial neural network terms. The input data comprise two vectors, x_1 and x_2. Activation is fed forward from each input to the corresponding output through the respective weights, w_1 and w_2 (see Figure 1 and equations (1) and (2)), to give outputs y_1 and y_2. One can derive an objective function for the maximization of this correlation under the constraint that the variance of y_1 and y_2 should be 1.
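
As a rough numerical illustration of this network (this is not the learning rule derived from the objective function, which handles the variance constraints with Lagrange multipliers), the sketch below performs batch gradient ascent on the sample correlation E[y_1 y_2] and re-imposes unit variance by renormalising the weights after each step. The toy data, dimensions, and step size are assumptions made for the example only:

import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)                                          # shared latent signal
x1 = np.c_[z + 0.5*rng.standard_normal(n), rng.standard_normal(n)]  # input vectors x1 (n x 2)
x2 = np.c_[rng.standard_normal(n), z + 0.5*rng.standard_normal(n)]  # input vectors x2 (n x 2)
x1 -= x1.mean(axis=0)
x2 -= x2.mean(axis=0)

w1 = rng.standard_normal(2)                  # weights from x1 to output y1
w2 = rng.standard_normal(2)                  # weights from x2 to output y2
alpha = 0.1                                  # learning rate (illustrative)

for _ in range(200):
    y1, y2 = x1 @ w1, x2 @ w2                # feedforward activations
    w1 += alpha * (x1.T @ y2) / n            # ascend E[y1*y2] with respect to w1
    w2 += alpha * (x2.T @ y1) / n            # ascend E[y1*y2] with respect to w2
    w1 /= np.std(x1 @ w1)                    # re-impose var(y1) = 1
    w2 /= np.std(x2 @ w2)                    # re-impose var(y2) = 1

y1, y2 = x1 @ w1, x2 @ w2
print(np.corrcoef(y1, y2)[0, 1])             # estimate of the (first) canonical correlation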
