MACHINE LEARNING TECHNIQUES - LASA
so that $T$ is given by:

$$T = \left(I - W\right)^{-1} = \begin{pmatrix} 1 & -w \\ -w & 1 \end{pmatrix}^{-1} = \frac{1}{1-w^2}\begin{pmatrix} 1 & w \\ w & 1 \end{pmatrix} \qquad (6.42)$$

Now, let the two-dimensional input vector have correlation matrix

$$C_{xx} = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$

where $\rho$ is the correlation coefficient and $\sigma_1^2$, $\sigma_2^2$ the variances of the elements of $x$. Now the correlation matrix for $y$ can be calculated. Since $y = T \cdot x$, we have:

$$C_{yy} = E\left\{ y y^T \right\} = E\left\{ (T \cdot x)(T \cdot x)^T \right\} = T \cdot C_{xx} \cdot T^T$$

Then,

$$C_{yy} = \frac{1}{\left(w^2-1\right)^2} \begin{pmatrix} \sigma_1^2 + 2w\rho\sigma_1\sigma_2 + w^2\sigma_2^2 & \rho\sigma_1\sigma_2\left(w^2+1\right) + \left(\sigma_1^2+\sigma_2^2\right)w \\ \rho\sigma_1\sigma_2\left(w^2+1\right) + \left(\sigma_1^2+\sigma_2^2\right)w & \sigma_2^2 + 2w\rho\sigma_1\sigma_2 + w^2\sigma_1^2 \end{pmatrix} \qquad (6.43)$$

The anti-Hebbian rule reaches equilibrium when the units are decorrelated, i.e. when $\Delta w_{12} = \Delta w_{21} = 0$ and hence the off-diagonal terms of $C_{yy}$ vanish. Notice that this gives us a quadratic equation in $w$ (which naturally we can solve). Let us consider the special case where the elements of $x$ have the same variance, so that $\sigma_1 = \sigma_2 = \sigma$. Then the cross-correlation terms become $\rho\sigma^2\left(w^2+1\right) + 2\sigma^2 w$, and so we must solve the quadratic equation:

$$\rho w^2 + 2w + \rho = 0 \qquad (6.44)$$

which has a zero at

$$w_f = \frac{-1 + \sqrt{1-\rho^2}}{\rho} \qquad (6.45)$$

One can further show that this is a stable point in the weight space.
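As a quick numerical check of this derivation (our own sketch, not part of the original notes; the values chosen for `rho` and `sigma` are arbitrary), the following Python snippet builds $T$ at the fixed point $w_f$ of equation (6.45) and verifies that the off-diagonal terms of $C_{yy}$ in (6.43) vanish:

```python
import numpy as np

# Numerical check of equations (6.42)-(6.45): for two inputs with equal
# variance sigma^2 and correlation rho, the lateral weight w_f of (6.45)
# should leave the outputs y = T*x decorrelated.
rho, sigma = 0.6, 2.0

# Input correlation matrix C_xx with sigma_1 = sigma_2 = sigma
C_xx = sigma**2 * np.array([[1.0, rho],
                            [rho, 1.0]])

# Fixed point of the anti-Hebbian rule, equation (6.45)
w_f = (-1.0 + np.sqrt(1.0 - rho**2)) / rho

# Transform T = (I - W)^{-1}, equation (6.42)
W = np.array([[0.0, w_f],
              [w_f, 0.0]])
T = np.linalg.inv(np.eye(2) - W)

# Output correlation matrix C_yy = T C_xx T^T, equation (6.43)
C_yy = T @ C_xx @ T.T
print(C_yy)                      # off-diagonal terms vanish at w = w_f
assert abs(C_yy[0, 1]) < 1e-9
```

For $\rho = 0.6$ the fixed point is $w_f = -1/3$, and the printed matrix is diagonal, as predicted by setting the off-diagonal terms of (6.43) to zero.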
Foldiak's second model also allows each neuron to receive its own output, through a self-connection $w_{ii}$ trained by

$$\Delta w_{ii} = \alpha\left(1 - y_i y_i\right) \qquad (6.46)$$

which can be written in matrix form as

$$\Delta W = \alpha\left(I - y y^T\right) \qquad (6.47)$$

where $I$ is the identity matrix. This network will converge when the outputs are decorrelated (due to the off-diagonal anti-Hebbian learning) and when the expected variance of each output is equal to 1; i.e., this learning rule forces each network output to take responsibility for the same amount of information, since the entropy of each output is the same. This is generalizable to

$$\Delta w_{ij} = \alpha\left(\theta_{ij} - y_i y_j\right) \qquad (6.48)$$

where $\theta_{ij} = 0$ for $i \neq j$. The value of $\theta_{ii}$, for each $i$, determines the variance of that output, and so we can manage the information output of each neuron.

6.7.2 CCA Revisited

Adapted from Peiling Lai and Colin Fyfe, Kernel and Nonlinear Canonical Correlation Analysis, Computing and Information Systems, 7 (2000), pp. 43-49.

The Canonical Correlation Network

Figure 1: The CCA network. By adjusting the weights $w_1$ and $w_2$, we maximize the correlation between $y_1$ and $y_2$.

Let us consider CCA in artificial neural network terms. The input data comprises two vectors $x_1$ and $x_2$. Activation is fed forward from each input to the corresponding output through the respective weights, $w_1$ and $w_2$ (see Figure 1 and equations (1) and (2)), to give outputs $y_1 = w_1^T x_1$ and $y_2 = w_2^T x_2$. One can derive an objective function for the maximization of this correlation, under the constraint that the variances of $y_1$ and $y_2$ should be 1, as:

$$J = E\left\{ y_1 y_2 + \frac{\lambda_1}{2}\left(1 - y_1^2\right) + \frac{\lambda_2}{2}\left(1 - y_2^2\right) \right\}$$

where $\lambda_1$ and $\lambda_2$ are Lagrange multipliers enforcing the two variance constraints.
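To make the objective concrete, here is a small self-contained sketch (ours, not from the notes or from Lai and Fyfe's paper): rather than adapting the multipliers $\lambda_1, \lambda_2$ online, it maximizes the sample version of $E\{y_1 y_2\}$ by alternating exact updates of $w_1$ and $w_2$, re-imposing the unit-variance constraints by renormalization after each step. The synthetic data and all variable names are our own assumptions.

```python
import numpy as np

# Batch sketch of the CCA objective: maximize E{y1*y2} for y1 = w1.x1,
# y2 = w2.x2 subject to E{y1^2} = E{y2^2} = 1. The constraints are
# enforced by renormalization instead of online Lagrange multipliers.
rng = np.random.default_rng(0)

# Synthetic paired data sharing one latent source (an assumption for the demo)
n = 5000
s = rng.normal(size=n)
x1 = np.column_stack([s + 0.5 * rng.normal(size=n), rng.normal(size=n)])
x2 = np.column_stack([rng.normal(size=n), s + 0.5 * rng.normal(size=n)])
x1 -= x1.mean(axis=0)
x2 -= x2.mean(axis=0)

C11 = x1.T @ x1 / n          # within-set covariance of x1
C22 = x2.T @ x2 / n          # within-set covariance of x2
C12 = x1.T @ x2 / n          # between-set covariance

w1 = rng.normal(size=2)
w2 = rng.normal(size=2)
for _ in range(100):
    # Maximize E{y1*y2} over w1 for fixed w2, then re-impose E{y1^2} = 1
    w1 = np.linalg.solve(C11, C12 @ w2)
    w1 /= np.sqrt(w1 @ C11 @ w1)
    # Same for w2
    w2 = np.linalg.solve(C22, C12.T @ w1)
    w2 /= np.sqrt(w2 @ C22 @ w2)

y1, y2 = x1 @ w1, x2 @ w2
print("canonical correlation ~", np.mean(y1 * y2))   # ~0.8 for this data
```

At convergence the weights satisfy $C_{12} w_2 = \lambda C_{11} w_1$ and $C_{21} w_1 = \lambda C_{22} w_2$ with $\lambda = E\{y_1 y_2\}$, which are the standard stationarity conditions of CCA; the network of Figure 1 seeks the same solution online by stochastic gradient ascent on $J$.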