MACHINE LEARNING TECHNIQUES - LASA


6.8.2 Bayesian self-organizing map

Adapted from: Yin, H. and Allinson, N.M., "Self-organizing mixture networks for probability density estimation," IEEE Transactions on Neural Networks, vol. 12, no. 2, March 2001, pp. 405-411.

The Bayesian self-organizing map (BSOM) is a method for estimating the probability distribution that generated a set of data points, on the basis of a Bayesian stochastic model. BSOM can be used to estimate the parameters of Gaussian Mixture Models (GMM). In this case, BSOM estimates the GMM parameters by minimizing the Kullback-Leibler divergence and thus provides an alternative to the classical Expectation-Maximization (EM) method, one that performs better both in convergence speed and in its ability to escape local minima. Since BSOM makes no assumption on the form of the component distributions (thanks to the KL metric), it can also be applied to estimate mixtures of distributions other than purely Gaussian ones.

The term SOM in BSOM comes from the fact that the update rule relies on a notion of neighborhood similar to that used in the update rule of the self-organizing map (SOM) neural network. BSOM builds a set of K probability density functions (pdfs). In contrast to EM, which updates the parameters of the pdfs globally so as to maximize the total probability that the mixture explains the data well, BSOM updates each pdf separately, using only a subset of the data located in a neighborhood around the data point for which that pdf is maximal. The advantage is that the update and estimation steps are faster. The drawback is that, as with the Kohonen network, the algorithm depends strongly on the choice of hyperparameters (number of pdfs, size of the neighborhood, learning rate).

6.8.2.1 Stochastic Model and Parameter Estimation

BSOM proceeds as follows. Suppose that the distribution of the data points is given by $p(x)$. BSOM builds an estimate $\hat{p}(x)$ by constructing a mixture of K probability density functions (pdfs) $p(x \mid \theta_i)$ with associated parameters $\theta_i$, $i = 1, \dots, K$, on the d-dimensional input space of $x$. If $P_1, \dots, P_K$ are the prior probabilities of each pdf, then the joint probability density for each data sample is given by:

$$\hat{p}(x) = \sum_{i=1}^{K} p(x \mid \theta_i)\, P_i \qquad (6.61)$$

BSOM updates the mixture incrementally so as to minimize the divergence between the two distributions, measured by the Kullback-Leibler metric:

$$I = -\int p(x) \log\!\left(\frac{\hat{p}(x)}{p(x)}\right) dx \qquad (6.62)$$
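To make Eq. (6.61) concrete in the GMM case mentioned above, here is a minimal Python sketch (ours, not from the original notes) that evaluates the mixture density and the posteriors $P(i \mid x)$ used by the update rule below; the helper names gaussian_pdf, mixture_density and posteriors are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a d-dimensional Gaussian N(mean, cov) evaluated at x."""
    d = x.shape[0]
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def mixture_density(x, priors, means, covs):
    """Eq. (6.61): p_hat(x) = sum_i p(x | theta_i) P_i."""
    return sum(P * gaussian_pdf(x, m, c) for P, m, c in zip(priors, means, covs))

def posteriors(x, priors, means, covs):
    """Posterior P(i | x) of each component, used to select the winning node."""
    joint = np.array([P * gaussian_pdf(x, m, c)
                      for P, m, c in zip(priors, means, covs)])
    return joint / joint.sum()
```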

The update step (or M-step, by analogy with EM) consists of re-estimating the parameters of each pdf by minimizing $I$ through its partial derivatives $\partial I / \partial \theta_{ij}$ with respect to each parameter, under the constraint that the resulting distribution is a valid probability, i.e. $\sum_{i=1}^{K} P_i = 1$. Such a constrained optimization problem can be solved with Lagrange multipliers and gives the following update step for the priors:

$$P_i(n+1) = P_i(n) - \alpha(n) \left[ P_i(n) - P(i \mid x) \right] \qquad (6.63)$$

where $n$ is the update step and $\alpha$ is the learning rate.

To speed up the computation, the updating of the above parameters is limited to a small neighborhood around the "winning node", i.e. the pdf $i$ with the largest posterior probability $P(i \mid x)$. The distribution at a given point $x$ is then approximated by a mixture of a small number of nodes at a time:

$$\hat{p}(x) \approx \sum_{i \in \eta} p(x \mid \theta_i)\, P_i \qquad (6.64)$$

with $\eta$ a neighborhood around the winner.

Learning thus proceeds as in the Kohonen network. At each iteration step $n$, one picks a data point at random, selects the winning pdf for this data point, and then updates the parameters of all the pdfs located in a neighborhood of the winning distribution (i.e. the pdfs whose maxima lie in a neighborhood of the chosen data point).
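Putting Eqs. (6.63) and (6.64) together, a minimal sketch of one on-line BSOM update could look as follows. It reuses the hypothetical posteriors helper from the previous sketch, takes the neighborhood over component indices, keeps the covariances fixed for brevity, and uses a commonly assumed stochastic update for the means; the exact parameter updates beyond Eq. (6.63) are not given in this excerpt.

```python
import numpy as np

def bsom_step(x, priors, means, covs, alpha, width):
    """One on-line BSOM update for a single data point x.

    priors : array of shape (K,), the current priors P_i
    means  : array of shape (K, d), the current component means
    covs   : K covariance matrices (kept fixed in this sketch)
    alpha  : learning rate alpha(n)
    width  : half-width (in component indices) of the neighborhood eta
    """
    post = posteriors(x, priors, means, covs)   # P(i | x), see previous sketch
    winner = int(np.argmax(post))               # node with the largest posterior
    low, high = max(0, winner - width), min(len(priors), winner + width + 1)
    for i in range(low, high):
        # Eq. (6.63): move the prior toward the posterior.
        priors[i] -= alpha * (priors[i] - post[i])
        # Assumed stochastic update of the mean (not stated in this excerpt).
        means[i] += alpha * post[i] * (x - means[i])
    priors /= priors.sum()                      # enforce sum_i P_i = 1
    return priors, means
```

In a full run, this step is repeated over randomly drawn data points while $\alpha(n)$ and the neighborhood width are gradually decreased, as in Kohonen's algorithm.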


6.9 Static Hopfield Network

In his seminal 1982 paper, John J. Hopfield [Hopfield, 1982] presented a novel type of associative memory, consisting of a fully connected artificial neural network, later called the Hopfield network. This work was one of the first demonstrations that models of physical systems, such as neural networks, could be used to solve computational problems. Follow-up research later showed that such systems could be implemented in hardware by combining standard components such as capacitors and resistors.

The original Hopfield network, which we will refer to as the static Hopfield network, consists of a one-layer, fully recurrent neural network. Subsequent developments led Hopfield and coworkers to propose several extensions. The major ones are the Boltzmann Machine, in which a parameter (the temperature) can be varied so as to increase the capacity of the model, and the continuous-time Hopfield network, whose neuron dynamics are continuous over time. In this class, we will consider only the static and continuous-time Hopfield models. The interested reader can refer to [Barna & Kaski, 1989; Ackley et al., 1985] for an introduction to the Boltzmann Machine.

The Hopfield network is created by supplying input data vectors, or pattern vectors, corresponding to the different classes. These patterns are called class patterns. In an n-dimensional data space, the class patterns should have n binary components {1, -1}; that is, each class pattern corresponds to a corner of a cube in n-dimensional space. The network is then used to classify distorted patterns into these classes. When a distorted pattern is presented to the network, it is associated with another pattern. If the network works properly, this associated pattern is one of the class patterns. In some cases (when the different class patterns are
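The storage rule itself is not given in this passage; as an illustration of the associative behaviour described above, the following sketch (ours, with hypothetical function names) assumes the standard Hebbian outer-product construction of the weights and a simple sign-threshold recall iteration.

```python
import numpy as np

def store_patterns(patterns):
    """Build the weight matrix from the class patterns (rows of +/-1 values)."""
    n_patterns, n = patterns.shape
    W = patterns.T @ patterns / n      # Hebbian outer-product rule (assumed here)
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, x, n_iter=20):
    """Iterate the network from a distorted pattern until it reaches a fixed point."""
    x = x.astype(float).copy()
    for _ in range(n_iter):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0        # break ties toward +1
        if np.array_equal(x_new, x):   # converged to a stored (or spurious) pattern
            break
        x = x_new
    return x
```

Storing a few well-separated class patterns in {1, -1}^n and flipping a small number of components of one of them, recall(W, distorted) will typically return the original class pattern.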

