MACHINE LEARNING TECHNIQUES - LASA


The estimation of the Mixture of Gaussians reviewed in the previous section assumed a diagonal covariance matrix for each Gaussian. This is rather restrictive, as it forces the Gaussians to have their axes aligned with the original axes of the data, see Figure 3-14. As a result, such models are ill-suited for estimating highly non-linear densities that have local correlations.

Figure 3-14: A spherical Mixture of Gaussians model can only fit Gaussians whose axes are aligned with the data axes. It may require far more local models to account for local correlations across the dimensions of the data X.

Figure 3-15: In contrast, a Gaussian Mixture Model can exploit local correlations and adapt the covariance matrix of each Gaussian so that it aligns with the local direction of correlation.

The Gaussian Mixture Model (GMM) is a generic method that fits a mixture of Gaussians with full covariance matrices. Each Gaussian serves as a local linear model and in effect performs a local PCA on the data. By mixing the effect of all these local models, one can obtain a very complex density estimate; see, as an illustration, the schematics in Figure 3-14 and Figure 3-15.

Let us suppose that we have at our disposal $X = \{x^j\}_{j=1,\dots,M}$, a set of M N-dimensional data points whose density we wish to estimate. We consider that these are observed instances of the variable $x$. The probability distribution function, or density, of $x$ is further assumed to be composed of a mixture of $k = 1,\dots,K$ Gaussians with means $\mu^k$ and covariance matrices $\Sigma^k$, i.e.

$p(x) = \sum_{k=1}^{K} \alpha_k \, p\!\left(x \mid \mu^k, \Sigma^k\right), \quad \text{with} \quad p\!\left(x \mid \mu^k, \Sigma^k\right) = \mathcal{N}\!\left(\mu^k, \Sigma^k\right)$    (3.14)
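To make Eq. (3.14) concrete, here is a minimal Python/NumPy sketch (not part of the original notes) that evaluates such a mixture density at a point; the parameter values below are illustrative placeholders, not estimates obtained from any dataset.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Multivariate normal density N(x | mu, Sigma) with a full covariance matrix."""
    n = mu.shape[0]
    diff = x - mu
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(sigma))
    return np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff) / norm

def gmm_density(x, alphas, mus, sigmas):
    """Eq. (3.14): p(x) = sum_k alpha_k * N(x | mu^k, Sigma^k)."""
    return sum(a * gaussian_pdf(x, m, s) for a, m, s in zip(alphas, mus, sigmas))

# Illustrative 2-component mixture in 2 dimensions.
alphas = [0.6, 0.4]
mus = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
sigmas = [np.array([[1.0, 0.8], [0.8, 1.0]]),     # full covariance: correlated dimensions
          np.array([[0.5, -0.2], [-0.2, 0.3]])]
print(gmm_density(np.array([1.0, 0.5]), alphas, mus, sigmas))
```

The off-diagonal terms of each covariance matrix are what allow a Gaussian to align with the local direction of correlation, which a diagonal or spherical model cannot do.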

The $\alpha_k$ are the so-called mixing coefficients. Their sum is 1, i.e. $\sum_{k=1}^{K} \alpha_k = 1$. These coefficients are usually estimated together with the parameters of the Gaussians (i.e. the means and covariance matrices). In some cases, they can however be set to a constant, for instance to give equal weight to each Gaussian (in this case, $\alpha_k = \frac{1}{K} \;\; \forall k = 1,\dots,K$). When the coefficients are estimated, they end up representing a measure of the proportion of data points that belong most to that particular Gaussian (this is similar to the definition of the $\alpha$ seen in the case of the Mixture of Gaussians in the previous section). In a probabilistic sense, these coefficients represent the prior probability with which each Gaussian may have generated the dataset and can hence be written $\alpha_k = p(k) = \frac{1}{M}\sum_{j=1}^{M} p\!\left(k \mid x^j\right)$.

Learning a GMM requires determining the means, covariance matrices and prior probabilities of the K Gaussians. The most popular method relies on Expectation-Maximization (E-M). We advise the reader to take a detour here and read the tutorial by Bilmes provided in the annexes of these lecture notes for a full derivation of the estimation of the GMM parameters through E-M. We briefly summarize the principle next.

We want to maximize the likelihood of the model's parameters $\Theta = \{\alpha_1,\dots,\alpha_K, \mu^1,\dots,\mu^K, \Sigma^1,\dots,\Sigma^K\}$ given the data, that is:

$\max_{\Theta} L\!\left(\Theta \mid X\right) = \max_{\Theta} p\!\left(X \mid \Theta\right)$    (3.15)

Assuming that the set of M data points $X = \{x^j\}_{j=1,\dots,M}$ is independently and identically distributed (i.i.d.), we get:

$\max_{\Theta} p\!\left(X \mid \Theta\right) = \max_{\Theta} \prod_{j=1}^{M} \sum_{k=1}^{K} \alpha_k \, p\!\left(x^j \mid \mu^k, \Sigma^k\right)$    (3.16)

Taking the log of the likelihood is often a good approach, as it simplifies the computation. Using the fact that the optimum $x^*$ of a function $f(x)$ is also an optimum of $\log f(x)$, one can compute:

$\max_{\Theta} p\!\left(X \mid \Theta\right) = \max_{\Theta} \log p\!\left(X \mid \Theta\right)$

$\max_{\Theta} \log \prod_{j=1}^{M} \sum_{k=1}^{K} \alpha_k \, p\!\left(x^j \mid \mu^k, \Sigma^k\right) = \max_{\Theta} \sum_{j=1}^{M} \log\!\left( \sum_{k=1}^{K} \alpha_k \, p\!\left(x^j \mid \mu^k, \Sigma^k\right) \right)$    (3.17)

The log of a sum is difficult to optimize directly, and one is led to proceed iteratively by calculating an approximation at each step (E-M), see Section 9.4.2.2. The final update procedure runs as follows:

Initialization of the parameters: initialize all parameters to a starting value. The priors $p(1),\dots,p(K)$ can for instance be initialized with a uniform prior, while the means can be initialized by running K-Means first. The complete set of parameters is then given by $\Theta = \{\alpha_1,\dots,\alpha_K, \mu^1,\dots,\mu^K, \Sigma^1,\dots,\Sigma^K\}$.
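The full E-M derivation is left to the Bilmes tutorial and Section 9.4.2.2. Purely as an illustration of the iterative scheme outlined above, the following Python/NumPy + SciPy sketch (our own code, with hypothetical helper names em_gmm and log_likelihood) alternates E and M steps after a simple initialization; the notes suggest initializing the means with K-Means, while for brevity this sketch starts from K randomly chosen data points instead.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(X, alphas, mus, sigmas):
    """Eq. (3.17): sum_j log( sum_k alpha_k * N(x^j | mu^k, Sigma^k) )."""
    dens = np.column_stack([a * multivariate_normal(m, s).pdf(X)
                            for a, m, s in zip(alphas, mus, sigmas)])
    return np.sum(np.log(dens.sum(axis=1)))

def em_gmm(X, K, n_iter=100):
    """Minimal EM loop for a GMM with full covariance matrices (illustrative sketch)."""
    M, N = X.shape
    # Initialization: uniform priors alpha_k = 1/K; means taken as K random points
    # (running K-Means first, as suggested in the text, is the better choice).
    alphas = np.full(K, 1.0 / K)
    mus = X[np.random.choice(M, K, replace=False)]
    sigmas = np.array([np.cov(X.T) + 1e-6 * np.eye(N) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: responsibilities p(k | x^j) for every point and every Gaussian.
        resp = np.column_stack([a * multivariate_normal(m, s).pdf(X)
                                for a, m, s in zip(alphas, mus, sigmas)])
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate priors, means and full covariance matrices.
        Nk = resp.sum(axis=0)              # effective number of points per Gaussian
        alphas = Nk / M                    # alpha_k = (1/M) * sum_j p(k | x^j)
        mus = (resp.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mus[k]
            sigmas[k] = (resp[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(N)

    return alphas, mus, sigmas
```

In practice one monitors the log-likelihood of Eq. (3.17), as computed by log_likelihood, across iterations and stops when it no longer increases.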

