MACHINE LEARNING TECHNIQUES - LASA
6.8.2 Bayesian self-organizing map
Adapted from Yin, H.; Allinson, N.M., "Self-organizing mixture networks for probability density estimation," IEEE Transactions on Neural Networks, vol. 12, no. 2, pp. 405-411, March 2001.
The Bayesian self-organizing map (BSOM) is a method for estimating the probability distribution generating a set of data points on the basis of a Bayesian stochastic model. BSOM can be used to estimate the parameters of Gaussian Mixture Models (GMM). In this case, BSOM estimates the GMM parameters by minimizing the Kullback-Leibler information metric and as such provides an alternative to the classical Expectation-Maximization (EM) method, with better performance in terms of both convergence speed and ability to escape local minima. Since BSOM makes no assumption on the form of the distribution (thanks to the KL metric), it can be applied to estimate mixtures of distributions other than purely Gaussian ones. The term SOM in BSOM comes from the fact that the update rule relies on a neighborhood concept similar to that used in the update rule of the self-organizing map (SOM) neural network.
BSOM creates a set of K probability density functions (pdfs). In contrast to EM, which updates the parameters of all the pdfs globally so as to maximize the total probability that the mixture explains the data, BSOM updates each pdf separately, using only the subset of the data located in a neighborhood of the point where that pdf is maximal. The advantage is that the update and estimation steps are faster. The drawback is that, like the Kohonen network, the algorithm depends strongly on the choice of the hyperparameters (number of pdfs, size of the neighborhood, learning rate).
6.8.2.1 Stochastic Model and Parameter Estimation
BSOM proceeds as follows. Suppose that the distribution of the data points is given by p(x). BSOM builds an estimate \hat{p}(x) by constructing a mixture of K probability density functions p(x \mid \theta_i) with associated parameters \theta_i, i = 1,...,K, on a d-dimensional input space x. If P_1,...,P_K are the prior probabilities of each pdf, then the joint probability density for each data sample is given by:

\hat{p}(x) = \sum_{i=1}^{K} p(x \mid \theta_i)\, P_i        (6.61)

BSOM updates the mixture incrementally so as to minimize the divergence between the two distributions, measured by the Kullback-Leibler metric:

I = -\int p(x) \log\!\left(\frac{\hat{p}(x)}{p(x)}\right) dx        (6.62)

The update step (or M-step, by analogy with EM) consists of re-estimating the parameters of each pdf by minimizing I via its partial derivatives \partial I / \partial \theta_{ij} over each parameter, under the constraint that
the resulting distribution is a valid probability, i.e. \sum_{i=1}^{K} P_i = 1. Such an optimization problem can be solved with Lagrange multipliers and gives the following update step:

P_i(n+1) = P_i(n) + \alpha(n)\left[ P(i \mid x) - P_i(n) \right]        (6.63)

where n is the update step and \alpha is the learning rate.
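For a quick numeric illustration (with made-up values): suppose K = 2, the current priors are P_1(n) = P_2(n) = 0.5, the posterior of the first pdf for the current sample is P(1|x) = 0.8, and \alpha(n) = 0.1. Equation (6.63) then gives P_1(n+1) = 0.5 + 0.1 (0.8 - 0.5) = 0.53, i.e. the prior of the first pdf is nudged towards its posterior responsibility for the sample.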
To speed up the computation, the updating of the above parameters is then limited to a small neighborhood around the "winning node", i.e. the pdf i which has the largest posterior probability P(i|x). The distribution for a given point x is then approximated by a mixture of a small number of nodes at a time, i.e.

\hat{p}(x) \approx \sum_{i \in \eta} p(x \mid \theta_i)\, P_i        (6.64)

with \eta a neighborhood around the winner.
Learning thus proceeds as in the Kohonen map. At each iteration step n, one picks a data point at random, selects the winning pdf for this data point and then updates the parameters of all the pdfs located in a neighborhood of the winning distribution (i.e. the pdfs whose maximum lies in a neighborhood of the chosen data point).
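To make the procedure concrete, the sketch below (Python/NumPy, not part of the original notes) implements one BSOM iteration for a mixture of isotropic Gaussians arranged on a one-dimensional chain of nodes: it computes the posteriors P(i|x), selects the winning pdf, and updates the priors following Eq. (6.63) together with the means and variances of the nodes in the neighborhood \eta. The mean and variance updates are a SOM-style stochastic approximation chosen here only for illustration; the exact parameter updates are derived in Yin & Allinson (2001).

import numpy as np

def gaussian_pdf(x, mean, var):
    """Isotropic Gaussian density p(x | theta_i) in d dimensions."""
    d = x.shape[0]
    diff = x - mean
    return np.exp(-0.5 * diff @ diff / var) / np.sqrt((2.0 * np.pi * var) ** d)

def bsom_step(x, means, variances, priors, alpha=0.05, radius=1):
    """One BSOM iteration: posteriors, winner selection, neighborhood update.
    In practice alpha and radius are decreased over the iterations, as in the SOM."""
    K = len(priors)
    # Posterior responsibilities P(i|x) under the current mixture (Bayes rule).
    lik = np.array([priors[i] * gaussian_pdf(x, means[i], variances[i]) for i in range(K)])
    post = lik / lik.sum()
    winner = int(np.argmax(post))
    # Nodes lying in the neighborhood eta of the winner on the 1-D chain.
    eta = [i for i in range(K) if abs(i - winner) <= radius]
    for i in eta:
        # Prior update, Eq. (6.63): move P_i towards the posterior P(i|x).
        priors[i] += alpha * (post[i] - priors[i])
        # Illustrative SOM-style updates of the Gaussian parameters.
        diff = x - means[i]
        means[i] += alpha * post[i] * diff
        variances[i] += alpha * post[i] * (diff @ diff / x.shape[0] - variances[i])
    priors /= priors.sum()  # keep the priors a valid probability distribution
    return means, variances, priors

# Usage sketch: K = 5 nodes fitted to 1-D samples drawn from two Gaussians.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)]).reshape(-1, 1)
K = 5
means = data[rng.choice(len(data), K, replace=False)].copy()
variances = np.ones(K)
priors = np.full(K, 1.0 / K)
for x in data[rng.permutation(len(data))]:
    means, variances, priors = bsom_step(x, means, variances, priors)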
6.9 Static Hopfield Network
In his seminal 1982 paper, John J. Hopfield [Hopfield, 1982] presented a novel type of associative memory, consisting of a fully connected artificial neural network, later called the Hopfield Network. This work was one of the first demonstrations that models of physical systems, such as neural networks, could be used to solve computational problems. Later, follow-up research showed that such systems could be implemented in hardware by combining standard components such as capacitors and resistors.
The original Hopfield network, which we will refer to as the static Hopfield network, consists of a one-layer fully recurrent neural network. Subsequent developments led Hopfield and coworkers to propose several extensions. The major ones are the Boltzmann Machine, in which a parameter (the temperature) can be varied so as to increase the capacity of the model, and the continuous-time Hopfield network, whose neuron dynamics are continuous in time. In this class, we will consider only the static and continuous-time Hopfield models. The interested reader can refer to [Barna & Kaski, 1989; Ackley et al., 1985] for an introduction to the Boltzmann Machine.
The Hopfield network is created by supplying input data vectors, or pattern vectors, corresponding to the different classes. These patterns are called class patterns. In an n-dimensional data space, the class patterns have n binary components {1, -1}; that is, each class pattern corresponds to a corner of a cube in the n-dimensional space. The network is then used to classify distorted patterns into these classes. When a distorted pattern is presented to the network, it is associated with another pattern. If the network works properly, this associated pattern is one of the class patterns.
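The snippet below (Python/NumPy, not part of the original notes) illustrates this association step on a toy example. The weight matrix that implements the memory is constructed later in this chapter; here it is built with the standard Hebbian outer-product rule, which is an assumption made only for this illustration.

import numpy as np

def store_patterns(patterns):
    """Build the weight matrix from the class patterns (rows of +/-1) using the
    standard Hebbian outer-product rule (assumed here); no self-connections."""
    num_patterns, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, max_iter=100):
    """Associate a (possibly distorted) pattern x with a stored class pattern by
    repeating asynchronous sign updates until the network state is stable."""
    x = x.copy()
    for _ in range(max_iter):
        changed = False
        for i in np.random.permutation(len(x)):
            s = np.sign(W[i] @ x)
            if s != 0 and s != x[i]:
                x[i] = s
                changed = True
        if not changed:          # fixed point reached: x is the associated pattern
            break
    return x

# Two orthogonal class patterns at corners of the 8-dimensional cube.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = store_patterns(patterns)
distorted = patterns[0].copy()
distorted[:2] *= -1                  # flip two components of the first class pattern
print(recall(W, distorted))          # recovers patterns[0]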
In some cases (when the different class patterns are