MACHINE LEARNING TECHNIQUES - LASA
6.6 Hebbian Learning

Hebbian learning is at the core of unsupervised learning techniques in neural networks. It takes its name from the original postulate of the neurobiologist Donald Hebb (Hebb, 1949), stating that:

"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

The Hebbian learning rule lets the weight between two units grow as a function of the coactivation of the input and output units. If we consider the classical perceptron with no noise:

y_i = \sum_j w_{ij} x_j    (6.20)

then the weights increase following:

\Delta w_{ji} = \alpha \cdot x_j \cdot y_i    (6.21)

\alpha is the learning rate and usually lies in [0, 1]. It determines the speed at which the weights grow. If x and y are binary, the weights increase only when both x and y are 1.

Note that, in the discrete case, the coactivation must be simultaneous. This is often too strong a constraint in a real-time system, which displays large variations in the timing of concurrent events. A continuous-time neural network would best represent such a system. In the continuous case, we would have:

\partial w_{ji}(t) = \alpha(t) \cdot x_j(t) \cdot y_i(t)

\Delta w_{ji}(t) = \int_{t_1}^{t_2} \alpha(t) \cdot x_j(t) \cdot y_i(t) \, dt

which corresponds to the area of superposed coactivation of the two neurons over the time interval [t_1, t_2].

One can show that \Delta w_{ij} = \sum_k w_{ik} x_k x_j and that, in the limit \Delta t \rightarrow 0, this is equivalent to

\frac{dW(t)}{dt} \propto C \cdot W(t)    (6.22)

where C_{ij} is the correlation coefficient, computed over all input patterns, between the i-th and j-th terms of the inputs, and W(t) is the matrix of weights at time t.

The major drawback of the Hebbian learning rule, as stated in Equation (6.21), is that the weights grow continuously and without bound. If learning is to run continuously, the weights can quickly exceed the floating-point range of your system. We will next review two major ways of limiting the growth of the weights.
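As a concrete illustration, the following minimal Python sketch applies the update of Equation (6.21) to a single linear unit fed with random binary patterns. The variable names (alpha, n_inputs, n_steps) and the random initialization are illustrative choices, not part of the original text; running it shows the norm of the weight vector growing without bound, which motivates the bounding schemes of the next section.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_steps = 5, 200
alpha = 0.1                                  # learning rate in [0, 1]
w = rng.normal(scale=0.1, size=n_inputs)     # initial weights w_j

for _ in range(n_steps):
    x = rng.integers(0, 2, size=n_inputs)    # binary input pattern x_j
    y = w @ x                                # output y = sum_j w_j x_j   (6.20)
    w += alpha * x * y                       # Hebbian update dw_j = alpha x_j y   (6.21)

print(np.linalg.norm(w))                     # grows without bound over the steps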
6.6.1 Weight bounds

One of the easiest ways to bound the weights is to fix a margin after which they are no longer incremented, see Figure 6-9. This has the effect of stopping the learning once the upper bound is reached. Another option is to renormalize the weights whenever they reach the upper bound, i.e.

if w_{ij} = w_{max}, then w_{ij} = \frac{w_{ij}}{w_{max}} \quad \forall i, j

Such a renormalization would give more importance to events occurring after the normalization, unless one rescales the increment factor \alpha by the same amount, i.e. by 1 / w_{max}.

Figure 6-9: Weight clipping

This, however, will quickly lead the network to reach the limit of its floating-point encoding for \alpha.

Another option is to renormalize at each time step, while conserving the length of the complete weight vector. The algorithm is performed in two steps:

w_j = w_j + \Delta w_j

w_j = \frac{w_j}{\| w_j \|}

This has the advantage of preserving one of the directions along which the weight matrix codes. However, it makes the whole computation heavier. Moreover, it tends not to conserve the relative importance across weight vectors.

Figure 6-10: Weight renormalization
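The two strategies can be sketched as follows, reusing the Hebbian update from the previous example. The bound w_max and the symmetric clipping interval [-w_max, w_max] are illustrative assumptions (the text above only describes an upper bound); the per-step renormalization follows the two-step algorithm given above.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, alpha, w_max = 5, 0.1, 1.0
w_clip = rng.normal(scale=0.1, size=n_inputs)   # weights for the clipping scheme
w_norm = w_clip.copy()                          # weights for the renormalization scheme

for _ in range(200):
    x = rng.integers(0, 2, size=n_inputs)       # binary input pattern

    # Option 1 (Figure 6-9): Hebbian update, then clip so the magnitude of
    # each weight never exceeds w_max (symmetric bound chosen for illustration).
    w_clip = np.clip(w_clip + alpha * x * (w_clip @ x), -w_max, w_max)

    # Option 2 (Figure 6-10): Hebbian update, then renormalize at every step,
    # conserving the length of the complete weight vector.
    w_norm = w_norm + alpha * x * (w_norm @ x)
    w_norm = w_norm / np.linalg.norm(w_norm)

print(w_clip)   # saturates at the bound
print(w_norm)   # stays on the unit sphere, only its direction evolves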