MACHINE LEARNING TECHNIQUES - LASA


Figure 6-17: To determine the convergence of a leaky-integrator neuron with a self-connection, one must find numerically the value m* at which the derivative of the membrane potential m is zero. Here there is a single stable point m* = 10 for a steady input S = 30 with a negative self-connection w_11 = −20.

The zeros of the derivative correspond to equilibrium points. These may, however, be stable or unstable. A stable point is one to which the system eventually returns after being slightly pushed away from it. In contrast, an unstable point is one from which a small perturbation of the input sends the system away from its equilibrium. In leaky-integrator neurons both cases arise easily, depending on the values of the different parameters, especially that of the self-connection. Figure 6-17 and Figure 6-18 illustrate these two cases.

Figure 6-18: Example of a leaky-integrator neuron for a steady input S = 30 with a positive self-connection w_11 = 20. The system has three equilibrium points, at m = 0, m = −10 and m = 10. m = 0 is an unstable point, whereas m = −10 and m = 10 are two stable points. This can be seen by observing that the slope of f(m) around m = 0 is positive, whereas it is negative around m = −10 and m = 10 (see left figure).

The stability of an equilibrium point can be determined by looking at the direction of the slope of the derivative of the membrane potential around the equilibrium point, i.e.:

f(m) = dm/dt = (1/τ) (−m + S + w_{11} x)    (6.80)

© A.G.Billard 2004 – Last Update March 2011
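As a sketch of the numerical procedure described above, the following Python snippet locates an equilibrium m* of Eq. (6.80) by bisection and then tests its stability from the sign of the slope of f around m*. The logistic output function x = σ(m), the time constant τ = 1 and all helper names are illustrative assumptions, not taken from the notes.

```python
import math

def sigma(m):
    """Assumed output function x = sigma(m): logistic squashing of the potential."""
    return 1.0 / (1.0 + math.exp(-m))

def dm_dt(m, S, w11, tau=1.0):
    """Right-hand side of Eq. (6.80): f(m) = (1/tau) * (-m + S + w11 * x)."""
    return (-m + S + w11 * sigma(m)) / tau

def find_equilibrium(S, w11, lo, hi, tol=1e-9):
    """Bisection on f(m) = 0; [lo, hi] must bracket a sign change of f."""
    f_lo = dm_dt(lo, S, w11)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = dm_dt(mid, S, w11)
        if abs(f_mid) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

def is_stable(m_star, S, w11, eps=1e-4):
    """An equilibrium is stable if the slope of f at m* is negative."""
    slope = (dm_dt(m_star + eps, S, w11) - dm_dt(m_star - eps, S, w11)) / (2 * eps)
    return slope < 0

# Parameters of Figure 6-17: steady input S = 30, self-connection w11 = -20.
m_star = find_equilibrium(S=30.0, w11=-20.0, lo=0.0, hi=30.0)
print(m_star)                           # close to the stable point m* = 10
print(is_stable(m_star, 30.0, -20.0))   # negative slope, hence stable
```

With w_11 = −20 the equation m = 30 − 20 σ(m) has a single root near m = 10, and the slope of f there is negative, which matches the single stable point shown in Figure 6-17.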

If the slope is negative, i.e. ∂f/∂m (m) < 0 (note that here we are exploring the function in input space and not in time), then the equilibrium point is stable. Intuitively, this is easy to see: the slope gives the direction in which the system is pulled. With a negative slope it is pulled back to where it was; with a positive slope it is pulled away, usually in the direction of another equilibrium point.

Given that the dynamics of a single neuron can already become very complex, the dynamics of groups of such neurons rapidly becomes intractable. In a seminal paper, Randall Beer (Adaptive Behavior, Vol. 3, No. 4, 1995) showed how a group of only two interconnected leaky-integrator neurons can give rise to four qualitatively different complex dynamics. Figure 6-19 shows the possible trajectories of the values taken by the outputs of the two neurons. Such a plot is called a phase plot.

Figure 6-19: Possible dynamics that can be generated by a network composed of two leaky-integrator neurons (R. Beer, Adaptive Behavior, 1995).
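To illustrate the kind of two-neuron dynamics discussed above, here is a minimal Euler-integration sketch of two coupled leaky-integrator neurons; each point of the returned trajectory is a pair of outputs (x1, x2), i.e. one point of the phase plot. The weight matrix, inputs and logistic output function are illustrative assumptions, not Beer's published parameters.

```python
import math

def sigma(m):
    """Assumed logistic output function x = sigma(m)."""
    return 1.0 / (1.0 + math.exp(-m))

def simulate(m0, W, S, tau=(1.0, 1.0), dt=0.01, steps=5000):
    """Euler-integrate two coupled leaky-integrator neurons:

        dm_i/dt = (1/tau_i) * (-m_i + S_i + sum_j W[i][j] * sigma(m_j))

    Returns the trajectory of outputs (x1, x2) traced in the phase plane.
    """
    m = list(m0)
    traj = []
    for _ in range(steps):
        x = [sigma(m[0]), sigma(m[1])]
        dm = [(-m[i] + S[i] + W[i][0] * x[0] + W[i][1] * x[1]) / tau[i]
              for i in range(2)]
        m = [m[i] + dt * dm[i] for i in range(2)]
        traj.append((x[0], x[1]))
    return traj

# Illustrative choice (not Beer's values): self-excitation with mutual inhibition.
W = [[5.0, -10.0],
     [-10.0, 5.0]]
S = [2.0, 2.0]
traj = simulate(m0=(0.5, -0.5), W=W, S=S)
x1, x2 = traj[-1]   # final point of the phase-plot trajectory
```

With this mutual-inhibition coupling the trajectory settles into a winner-take-all fixed point (one output near 1, the other near 0); other weight choices yield the richer dynamics catalogued by Beer, such as oscillations.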

