MACHINE LEARNING TECHNIQUES - LASA
Figure 6-17: To determine the convergence of a leaky-integrator neuron with a self-connection, one must find numerically the value m* for which the derivative of the membrane potential m is zero. Here we have a single stable point m* = 10 for a steady input S = 30 with a negative self-connection w11 = -20.

The zeros of the derivative correspond to equilibrium points. These may, however, be stable or unstable. A stable point is such that, if the system is slightly pushed away from it, it will eventually come back to its equilibrium. In contrast, an unstable point is such that a small perturbation in the input may send the system away from its equilibrium. In leaky-integrator neurons, both cases arise easily, depending on the values of the different parameters, and especially on that of the self-connection. Figure 6-17 and Figure 6-18 illustrate these two cases.

Figure 6-18: Example of a leaky-integrator neuron for a steady input S = 30 with a positive self-connection w11 = 20. The system has three equilibrium points, at m = 0, m = -10 and m = 10. m = 0 is an unstable point, whereas m = -10 and m = 10 are two stable points. This can be seen by observing that the slope of dm/dt as a function of m is positive around m = 0, whereas it is negative around m = -10 and m = 10 (see left figure).

The stability of an equilibrium point can be determined by looking at the sign of the slope of the derivative of the membrane potential around the equilibrium point, i.e.:

f(m) = dm/dt = (1/τ)(-m + S + w11 x)    (6.80)

© A.G.Billard 2004 – Last Update March 2011
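The equilibrium points can be found numerically by scanning f(m) for sign changes and refining each bracket, as the text describes. The sketch below does this for the negative self-connection case of Figure 6-17; the output function x(m) is assumed to be a standard sigmoid, and τ = 1 is an illustrative choice (the notes do not fix either).

```python
import math

# Leaky-integrator neuron with a self-connection, Eq. (6.80):
#   tau * dm/dt = -m + S + w11 * x(m)
# Assumed output nonlinearity: sigmoid x(m) = 1 / (1 + exp(-m)).
TAU, S, W11 = 1.0, 30.0, -20.0  # illustrative parameters (Figure 6-17 case)

def x(m):
    # Sigmoid output; guard against overflow for very negative m.
    return 1.0 / (1.0 + math.exp(-m)) if m > -50 else 0.0

def f(m):
    """Right-hand side of the membrane dynamics, Eq. (6.80)."""
    return (-m + S + W11 * x(m)) / TAU

def bisect_zero(lo, hi, tol=1e-10):
    """Refine a sign-change bracket [lo, hi] of f down to a root m*."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) == 0.0 or hi - lo < tol:
            return mid
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Scan a range of membrane potentials for sign changes of f.
equilibria = []
grid = [m / 10.0 for m in range(-300, 301)]
for a, b in zip(grid, grid[1:]):
    if f(a) * f(b) < 0:
        equilibria.append(bisect_zero(a, b))

print(equilibria)  # a single equilibrium near m* = 10 for these parameters
```

With w11 = -20, f is strictly decreasing, so the scan finds exactly one zero, consistent with the single stable point of Figure 6-17.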
If the slope is negative, i.e. ∂f/∂m (m*) < 0 (note that here we are exploring the function in input space and not in time), then the equilibrium point is stable. Intuitively, this is easy to see: the slope gives you the direction in which you are pulled. With a negative slope, you are pulled back to where you were. With a positive slope, you are pulled away from where you were, usually in the direction of another equilibrium point.

Given that the dynamics of a single neuron can already become very complex, the dynamics of groups of such neurons rapidly become intractable. In a seminal paper, Randall Beer (Adaptive Behavior, Vol. 3, No. 4, 1995) showed how a group of only two interconnected leaky-integrator neurons could lead to four complex dynamics. Figure 6-19 shows the possible trajectories of the values taken by the outputs of the two neurons. Such a plot is called a phase plot.

Figure 6-19: Possible dynamics that can be generated by a network composed of two leaky-integrator neurons (R. Beer, Adaptive Behavior, 1995).
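The stability criterion above can be checked numerically with a centered difference: estimate the slope of f around the candidate equilibrium and test its sign. This is a minimal sketch using the same assumed sigmoid output and illustrative parameters as before (neither is fixed by the notes).

```python
import math

# Stability test for an equilibrium m* of  tau * dm/dt = -m + S + w11 * x(m):
# the point is stable iff the slope of f at m* is negative.
TAU, S, W11 = 1.0, 30.0, -20.0  # illustrative parameters

def x(m):
    # Assumed sigmoid output nonlinearity.
    return 1.0 / (1.0 + math.exp(-m))

def f(m):
    """Right-hand side of the membrane dynamics, Eq. (6.80)."""
    return (-m + S + W11 * x(m)) / TAU

def is_stable(m_star, h=1e-6):
    """Centered-difference slope of f around m*: a negative slope pulls
    the state back to m*, a positive slope pushes it away."""
    slope = (f(m_star + h) - f(m_star - h)) / (2 * h)
    return slope < 0

print(is_stable(10.0009))  # True: the equilibrium near m* = 10 is stable
```

For the positive self-connection of Figure 6-18, the same test would flag the middle equilibrium as unstable (positive slope) and the two outer ones as stable.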