

Figure 5-3: TOP: Marginal (left) and joint (right) distributions of two statistically independent sources, a Gaussian distribution and a uniform distribution. BOTTOM: The marginal distributions after whitening are closer to that of a Gaussian distribution.

ICA by minimization of mutual information

ICA searches for statistically independent sources. These sources must therefore have minimal mutual information. A measure of the mutual information across the $q$ sources is given by:

\[
I(s_1, \dots, s_q) = \sum_{i=1}^{q} h(s_i) - h(x) - \log\det\left(A^{-1}\right)
\tag{5.20}
\]

where $h(x)$ is the entropy of the distribution of the observations.
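To make the objective in (5.20) concrete: $h(x)$ does not depend on the choice of $A$, and once the data has been whitened the unmixing matrix $A^{-1}$ can be restricted to be orthogonal, so that $\log\det(A^{-1}) = 0$. Minimizing $I$ then amounts to minimizing the sum of the marginal entropies $h(s_i)$. Below is a minimal Python sketch of this reduced objective; the histogram-based entropy estimator and the function names are illustrative choices, not taken from this text:

```python
import numpy as np

def marginal_entropy(s_i, bins=50):
    """Histogram-based estimate of the differential entropy h(s_i)."""
    p, edges = np.histogram(s_i, bins=bins, density=True)
    widths = np.diff(edges)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]) * widths[nz])

def mi_objective(W, x):
    """Sum of marginal entropies of s = W x.  For whitened x and
    orthogonal W this differs from I(s_1, ..., s_q) in (5.20) only
    by a constant, so it can stand in for the quantity to minimize."""
    s = W @ x
    return sum(marginal_entropy(row) for row in s)
```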

To recall, ICA started with the assumption that the data is centered and white, i.e. $x \sim N(0, I)$. In practice, this requires first subtracting the mean of the data and then proceeding to a decorrelation through PCA, followed by a normalization; see Section 2.3.4. By extension, if $x \sim N(0, I)$, then the sources are also centered and white, and hence:

\[
I = E\left\{ss^T\right\} = A^{-1} E\left\{xx^T\right\} \left(A^{-1}\right)^T
\tag{5.21}
\]

Given that $\det(I) = 1$, we have:

\[
\det\left(A^{-1} E\left\{xx^T\right\} \left(A^{-1}\right)^T\right) = 1
\;\Leftrightarrow\;
\det\left(A^{-1}\right) \det\left(E\left\{xx^T\right\}\right) \det\left(\left(A^{-1}\right)^T\right) = 1.
\tag{5.22}
\]
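The identity (5.22) is easy to verify numerically. The sketch below whitens a two-dimensional mixture of one Gaussian and one uniform source (as in Figure 5-3) by centering, PCA decorrelation, and normalization, then checks that $E\{zz^T\} \approx I$ as in (5.21) and that the product of determinants in (5.22) equals one. The mixing matrix, seed, and variable names are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent sources, one Gaussian and one uniform (cf. Figure 5-3),
# mixed by a hypothetical matrix A.
n = 10_000
s_true = np.vstack([rng.normal(size=n), rng.uniform(-1.0, 1.0, size=n)])
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x = A @ s_true

# Whitening as outlined in Section 2.3.4: center the data,
# decorrelate through PCA, then normalize the variances.
x = x - x.mean(axis=1, keepdims=True)
cov = np.cov(x)                               # empirical E{xx^T}
eigval, eigvec = np.linalg.eigh(cov)
W = np.diag(eigval ** -0.5) @ eigvec.T        # plays the role of A^{-1}
z = W @ x

# E{zz^T} is (up to floating point) the identity, as assumed in (5.21).
print(np.round(np.cov(z), 2))

# Determinant identity (5.22): det(W) det(E{xx^T}) det(W^T) = 1.
print(np.linalg.det(W) * np.linalg.det(cov) * np.linalg.det(W.T))  # ~ 1.0
```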

