Fig. 1 Diagram of a liquid state machine. (a, b) The input signal is transformed into a series of spikes via some function. (c) The spike train is introduced into the neural microcircuit, or "liquid". (d) Snapshots of the state of the "liquid" are recorded in state vectors. (e) The state vectors are used as input to train a learning algorithm, the reading function.

Under STDP, a synapse's weight is adjusted in proportion to the temporal correlation between the pre- and post-synaptic neurons. If the pre-synaptic neuron fires first, the weight is increased; if the post-synaptic neuron fires first, the weight is decreased. In this way, synapses that participate in a neuron's breach of threshold (resulting in a spike) are strengthened, while those that do not are weakened [1]. We will refer to an LSM that uses STDP synapses as a Hebbian Liquid State Machine (HLSM).

B. Separation

The effectiveness of an LSM is a function of two qualities: the approximation and the separation of the LSM. Approximation refers to the reading function's ability to classify the state vectors acquired from the liquid. Separation refers to "the amount of separation between trajectories of internal states of the system that are caused by two different input streams" [5]; in other words, the ability of the liquid to produce discernibly different patterns when given different classes of inputs. Since any machine learning algorithm can be used for the reading function, this paper focuses on improving the separation attribute of the LSM.

In order to measure the separation of a liquid we use the following definition, presented first by Goodman [6]:

$$Sep(\Psi, O) = \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{\left\| C_m(O_i) - C_m(O_j) \right\|_2}{N^2}$$

where $\Psi$ is a neural microcircuit (or liquid), $O$ is a set of state vectors, and $N$ is the total number of output classes represented by $O$. $O$ is divided into $N$ subsets, each of which contains all elements of $O$ belonging to a common output class. The center of mass, $C_m$, for each subset is calculated as follows:

$$C_m(O_i) = \frac{\sum_{o_j \in O_i} o_j}{|O_i|}$$

The Euclidean distance between every pairwise combination of center-of-mass vectors is determined. Each center of mass is the estimate of the representative state vector for its output class. The ideal liquid would separate the state vectors for each output class into distinct clusters within the vector space. Goodman's research indicates that there is a positive correlation between this separation measure and the accuracy of the LSM [6].

There are certain negative behaviors common in LSMs that can significantly decrease their separation. These behaviors are termed pathological synchrony and over-stratification.

Pathological synchrony occurs when most of the neurons in the liquid get caught in infinite positive feedback loops with respect to their firing. These loops continuously influence the state of the liquid, overriding the flow of information from the input. In extreme cases the entire liquid can begin firing continuously in synchrony (see Fig. 2). Such liquids have low separation because of the loss of pattern associated with such crowded spiking.

The opposite extreme is over-stratification, which occurs when groups of neurons do not propagate a series of spikes induced by an input spike for long enough. In these cases, input spikes do not influence each other within the liquid, resulting in a loss of the temporal coherency that can be represented by the liquid (see Fig. 3). Since both of these behaviors are detrimental to the effectiveness of LSMs, this study emphasizes eliminating them.
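To make the separation measure concrete, the following sketch computes $Sep(\Psi, O)$ from a set of state vectors that has already been grouped by output class. It is a minimal illustration of the two equations above, not the simulator used in this study; the function names and the NumPy representation of state vectors are assumptions.

```python
import numpy as np

def center_of_mass(class_vectors):
    # C_m(O_i): the mean of all state vectors belonging to one output class
    return np.mean(class_vectors, axis=0)

def separation(classes):
    # classes: a list of N arrays, one per output class, each of shape
    # (number of state vectors in that class, dimensionality of a state vector)
    N = len(classes)
    centers = [center_of_mass(c) for c in classes]
    total = 0.0
    for i in range(N):
        for j in range(N):
            # Euclidean distance between every pairwise combination of centers
            total += np.linalg.norm(centers[i] - centers[j])
    return total / N ** 2
```

A liquid that maps different input classes to well-separated clusters of state vectors yields a large value of this measure, while a liquid whose state vectors collapse onto one another yields a value near zero.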
III. RESULTS

Two experiments were performed in order to observe how Hebbian learning affects the structure and separation of liquids. Random input was introduced into the liquid for the first experiment, while speech data was used as input for the second. In the first experiment, Hebbian learning was also compared to random weight updates.

A. Effects of Random Input on HLSMs

For this experiment, the effect of a single channel of random input on an HLSM's separation was tested in order to better understand the dynamics of the HLSM architecture. Two initial liquid states were explored: a pathological synchrony state and an over-stratification state. These initial states were selected in order to observe how Hebbian learning could potentially recover from them. For each initial state, actual Hebbian learning was compared against random weight updates. Each of these four experiments employed 100 iterations of training of the liquid, using either Hebbian learning or random weight updates. For each iteration of training, the separation of the liquid was determined with a set of state vectors, $O$, of size 100. Each state vector in $O$ was created by introducing a randomly generated train of 25 spikes over a 1.0 second time interval, $d$, as input into the liquid. The state vector was measured at time $d$ with $\varepsilon = 1.0$ ms. Since the input was random, each state vector belonged to a unique output class. Thus, for this experiment $N = |O|$ (see section II.B above).

Fig. 2 Pathological synchrony. This occurs when most of the neurons in the liquid get caught in infinite positive feedback loops with respect to their firing. The x-axis is time in seconds; the y-axis contains neuron identification numbers. Dots indicate the presence of a spike.

Each liquid was prepared with 135 neurons to be comparable to previous research [7]. The input was encoded as a spike train from a single input neuron. Future research will expand the number of input neurons used. The remainder of the settings were chosen based on a series of preliminary experiments and reflect the best results obtained in those trials. The connection probability from the input neuron to the other inter-neurons was 0.1, while the probability of connection between the inter-neurons was 0.05. The mean delay for the synapses of the liquid was 10 ms with a standard deviation of 1 ms. For the liquids initiated in a pathological synchrony state, the mean weight value for synapses was set at $1 \times 10^{-7}$, while the mean weight value in liquids initiated in an over-stratification state was $8 \times 10^{-8}$. The standard deviation of weight values for both initial states was $1 \times 10^{-8}$.

Fig. 3 Over-stratification. The x-axis is time from zero to one second. The y-axis contains neuron identification numbers. Dots indicate spikes.

For the Hebbian learning, all STDP synapse settings were selected based on preliminary experiments. The maximum weight value allowed for all synapses in both the Hebbian learning and random weight update experiments was $1 \times 10^{-5}$. Each training of the liquid involved introducing a randomly generated train of 25 spikes over a 1.0 second time interval. This is identical to how separation data is collected, except that the weights are now allowed to change in accordance with the STDP synapse rules outlined earlier.

For random weight updates, each synapse's weight was updated by a value drawn from a normal distribution with a mean of $-2.8742 \times 10^{-8}$ and a standard deviation of $1.5726 \times 10^{-7}$. This mean and standard deviation were obtained by calculating and averaging the mean and standard deviation of weight changes in ten preliminary runs through unique liquids using Hebbian learning. Thus, the values, though random, represent reasonable changes in weight for the liquids in this study.
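The two training conditions can be summarized with the following sketch. The maximum weight and the random-update statistics are the values given above; the STDP magnitudes and time constant (A_PLUS, A_MINUS, TAU) are illustrative assumptions, since the paper states only that the STDP settings were chosen in preliminary experiments.

```python
import numpy as np

rng = np.random.default_rng()

A_PLUS = 1e-9            # assumed potentiation magnitude (not reported in the paper)
A_MINUS = 1e-9           # assumed depression magnitude (not reported in the paper)
TAU = 0.02               # assumed STDP time constant in seconds (not reported)
MAX_WEIGHT = 1e-5        # maximum synaptic weight (from the paper)
RAND_MEAN = -2.8742e-8   # mean of random weight updates (from the paper)
RAND_STD = 1.5726e-7     # standard deviation of random weight updates (from the paper)

def stdp_update(weight, t_pre, t_post):
    """Simplified pair-based STDP: strengthen the synapse if the pre-synaptic
    spike precedes the post-synaptic spike, weaken it otherwise."""
    dt = t_post - t_pre
    if dt > 0:   # pre-synaptic neuron fired first: potentiation
        weight += A_PLUS * np.exp(-dt / TAU)
    else:        # post-synaptic neuron fired first: depression
        weight -= A_MINUS * np.exp(dt / TAU)
    return float(np.clip(weight, 0.0, MAX_WEIGHT))

def random_update(weight):
    """Control condition: the weight change is drawn from a normal distribution
    whose statistics were measured from preliminary Hebbian runs."""
    weight += rng.normal(RAND_MEAN, RAND_STD)
    return float(np.clip(weight, 0.0, MAX_WEIGHT))
```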
The results of the above experiment are shown in Fig. 4. For each of the four experiments, the average separation of ten unique liquids is displayed. The Hebbian learning trials do not show a significant change in separation, while the random weight update trials drop steadily in separation after only ten iterations.
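For reference, the structure of one of these trials can be sketched as follows. Here `liquid`, `train_step`, and `measure_state` are hypothetical stand-ins for the neural microcircuit simulator, one training pass (Hebbian or random update), and the recording of a state vector at time d; `separation` refers to the sketch given after section II.B. The iteration counts, spike count, and interval length are the values reported above.

```python
import numpy as np

rng = np.random.default_rng()

NUM_ITERATIONS = 100     # training iterations per liquid (from the paper)
NUM_STATE_VECTORS = 100  # size of the state-vector set O used to estimate separation
NUM_SPIKES = 25          # spikes per randomly generated input train
DURATION = 1.0           # length of the input interval d, in seconds

def random_spike_train():
    # 25 spike times drawn uniformly over a 1.0 second interval
    return np.sort(rng.uniform(0.0, DURATION, NUM_SPIKES))

def run_trial(liquid, train_step, measure_state):
    """Hypothetical driver for one trial; the simulator itself is not shown."""
    separation_history = []
    for _ in range(NUM_ITERATIONS):
        # One training pass: present a random spike train while the synaptic
        # weights are allowed to change (STDP or random update).
        train_step(liquid, random_spike_train())

        # Measure separation: weights are held fixed and 100 state vectors are
        # collected; with random input, every state vector is its own output
        # class, so each class contains exactly one vector (N = |O|).
        classes = [np.atleast_2d(measure_state(liquid, random_spike_train()))
                   for _ in range(NUM_STATE_VECTORS)]
        separation_history.append(separation(classes))
    return separation_history
```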
