MACHINE LEARNING TECHNIQUES - LASA


6.6 Hebbian Learning

Hebbian learning is the core of unsupervised learning techniques in neural networks. It takes its name from the original postulate of the neurobiologist Donald Hebb (Hebb, 1949), stating that:

"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

The Hebbian learning rule lets the weight between two units grow as a function of the coactivation of the input and output units. If we consider the classical perceptron with no noise:

$y_i = \sum_j w_{ij} x_j$    (6.20)

then the weights increase following:

$\Delta w_{ij} = \alpha \cdot x_j \cdot y_i$    (6.21)

$\alpha$ is the learning rate and usually lies in the interval $[0,1]$. It determines the speed at which the weights grow. If $x_j$ and $y_i$ are binary, the weight $w_{ij}$ increases only when both are equal to 1. Note that, in the discrete case, the coactivation must be simultaneous. This is often too strong a constraint in a real-time system, which displays large variations in the timing of concurrent events. A continuous-time neural network would best represent such a system. In the continuous case, we would have:

$\frac{\partial w_{ij}(t)}{\partial t} = \alpha(t) \cdot x_j(t) \cdot y_i(t)$

$\Delta w_{ij}(t) = \int_{t_1}^{t_2} \alpha(t) \cdot x_j(t) \cdot y_i(t) \, dt$

which corresponds to the area of superposed coactivation of the two neurons in the time interval $[t_1, t_2]$.

One can show that $\frac{\Delta w_{ij}}{\Delta t} = \sum_k w_{ik} x_k x_j$ and that, in the limit $\Delta t \to 0$, this is equivalent to

$\frac{dW(t)}{dt} \propto C \cdot W(t)$    (6.22)

where $C_{ij}$ is the correlation coefficient, computed over all input patterns, between the $i$-th and $j$-th terms of the inputs, and $W(t)$ is the matrix of weights at time $t$.

The major drawback of the Hebbian learning rule, as stated in Equation (6.21), is that the weights grow continuously and without bounds. This can quickly get out of hand: if learning is to run continuously, the values taken by the weights can quickly exceed the floating-point range of your system. We next review two major ways of limiting the growth of the weights.
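To make the discrete rule in Equations (6.20)-(6.21) concrete, here is a minimal NumPy sketch (the function and variable names, such as hebbian_step, are illustrative assumptions, not from the original text). Presenting the same pattern repeatedly illustrates the unbounded growth discussed above.

    import numpy as np

    def hebbian_step(W, x, alpha=0.1):
        """One discrete Hebbian update:
        y_i = sum_j w_ij x_j       (Eq. 6.20)
        dw_ij = alpha * y_i * x_j  (Eq. 6.21)
        """
        y = W @ x                          # output of the noise-free linear unit
        W = W + alpha * np.outer(y, x)     # coactivation term y_i * x_j
        return W, y

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(3, 5))   # 3 output units, 5 inputs
    x = rng.normal(size=5)                   # one input pattern

    # Presenting the same pattern over and over: the weight norm grows without bound.
    for _ in range(50):
        W, y = hebbian_step(W, x)
    print(np.linalg.norm(W))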

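Equation (6.22) can also be illustrated numerically for a single output unit with weight vector $w$: averaging the Hebbian update $\alpha \, y \, x$ over many input patterns gives $\alpha \, C w$, with $C$ the input correlation matrix. The sketch below is only an illustration under assumed zero-mean Gaussian inputs; the variable names are not from the original text.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_patterns, alpha = 4, 100_000, 0.01

    A = rng.normal(size=(n_in, n_in))
    X = rng.normal(size=(n_patterns, n_in)) @ A.T   # rows are zero-mean input patterns x
    C = X.T @ X / n_patterns                        # empirical correlation matrix E[x x^T]

    w = rng.normal(size=n_in)                       # weights of a single output unit
    Y = X @ w                                       # y = w . x for every pattern

    mean_update = alpha * (X * Y[:, None]).mean(axis=0)   # average of alpha * y * x
    print(np.allclose(mean_update, alpha * C @ w))        # True: <dw> is proportional to C w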
6.6.1 Weight bounds

One of the easiest ways to bound the weights is to fix a margin beyond which they are no longer incremented; see Figure 6-9. This has the effect of stopping the learning once the upper bound is reached. Another option is to renormalize the weights whenever they reach the upper bound, i.e.

if $w_{ij} = w_{\max}$, then $w_{ij} \leftarrow \frac{w_{ij}}{w_{\max}} \quad \forall \, i, j$

Such a renormalization would give more importance to events occurring after the normalization, unless one rescales the increment factor $\alpha$ by the same amount, i.e. by $\frac{1}{w_{\max}}$.

Figure 6-9: Weight clipping

This, however, will quickly lead the network to reach the limit of its floating-point encoding for $\alpha$. Another option is to renormalize at each time step, while conserving the length of the complete weight vector. The algorithm is performed in two steps:

$w_j \leftarrow w_j + \Delta w_j$

$w_j \leftarrow \frac{w_j}{\| w_j \|}$

This has the advantage of preserving one of the directions along which the weight matrix codes. However, it makes the whole computation heavier. Moreover, it tends not to conserve the relative importance across weight vectors.

Figure 6-10: Weight renormalization
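The two bounding strategies above can be sketched as follows. This is a minimal NumPy illustration, not the original implementation: the function names and the bound w_max = 1.0 are assumptions, and each output unit's weight vector (a row of W) is taken as the "complete weight vector" being renormalized.

    import numpy as np

    def hebbian_clipped(W, x, alpha=0.1, w_max=1.0):
        """Hebbian update followed by clipping: weights stop changing once they hit the bound."""
        W = W + alpha * np.outer(W @ x, x)
        return np.clip(W, -w_max, w_max)

    def hebbian_renormalized(W, x, alpha=0.1):
        """Hebbian update followed by renormalization: w_j <- (w_j + dw_j) / ||w_j + dw_j||."""
        W = W + alpha * np.outer(W @ x, x)
        return W / np.linalg.norm(W, axis=1, keepdims=True)

    rng = np.random.default_rng(2)
    W = rng.normal(scale=0.1, size=(3, 5))
    x = rng.normal(size=5)
    for _ in range(100):
        W = hebbian_renormalized(W, x)
    print(np.linalg.norm(W, axis=1))   # each weight vector keeps unit length, only its direction adapts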
