In certain applications, however, it may be desired to use a symmetric decorrelation, in which no vectors are "privileged" over others. This can be accomplished, e.g., by the classical method involving matrix square roots.
Let
\[
W = \left(W W^T\right)^{-1/2} W \qquad (2.33)
\]
where $W$ is the matrix $(w_1, \ldots, w_q)^T$ of the vectors, and the inverse square root $\left(W W^T\right)^{-1/2}$ is obtained from the eigenvalue decomposition of $W W^T = F \Lambda F^T$ as $\left(W W^T\right)^{-1/2} = F \Lambda^{-1/2} F^T$.
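To make the eigendecomposition route concrete, here is a minimal NumPy sketch of Eq. (2.33); the function name and the random test matrix are illustrative choices, and it assumes $W W^T$ has full rank so that $\Lambda^{-1/2}$ exists:

```python
import numpy as np

def symmetric_decorrelation(W):
    # Eigendecomposition of the symmetric matrix W W^T = F Lambda F^T.
    lam, F = np.linalg.eigh(W @ W.T)
    # (W W^T)^{-1/2} = F Lambda^{-1/2} F^T; requires all eigenvalues > 0,
    # i.e. W must have full row rank.
    inv_sqrt = F @ np.diag(lam ** -0.5) @ F.T
    return inv_sqrt @ W

# After decorrelation the rows of W are orthonormal: W W^T = I.
W = np.random.randn(3, 5)            # example: q = 3 vectors in 5 dimensions
W_dec = symmetric_decorrelation(W)
print(np.allclose(W_dec @ W_dec.T, np.eye(3)))   # True
```

Because the same symmetric transform is applied to all rows at once, no vector is treated differently from the others, in contrast to deflation-type (Gram-Schmidt) schemes.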
A simpler alternative is the following iterative algorithm:

1. Let $W = W / \sqrt{\left\| W W^T \right\|}$.
2. Let $W = \tfrac{3}{2} W - \tfrac{1}{2} W W^T W$. \qquad (2.34)

Repeat step 2 until convergence.
The norm in step 1 can be almost any ordinary matrix norm, e.g., the 2-norm or the largest absolute row (or column) sum (but not the Frobenius norm).
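A minimal NumPy sketch of this iteration, assuming the 2-norm is used in step 1; the function name, tolerance, and iteration cap are illustrative:

```python
import numpy as np

def iterative_decorrelation(W, tol=1e-12, max_iter=100):
    # Step 1: normalise by the square root of the matrix 2-norm of W W^T
    # (largest singular value); per the text, the Frobenius norm must
    # not be used here.
    W = W / np.sqrt(np.linalg.norm(W @ W.T, ord=2))
    # Step 2: W <- (3/2) W - (1/2) W W^T W, repeated until W W^T ~ I.
    for _ in range(max_iter):
        W = 1.5 * W - 0.5 * (W @ W.T @ W)
        if np.max(np.abs(W @ W.T - np.eye(len(W)))) < tol:
            break
    return W
```

The normalisation in step 1 scales every singular value of $W$ into $(0, 1]$, and the update in step 2 then pushes each singular value toward 1, so the iteration converges to an orthonormal $W$ without ever forming a matrix inverse or eigendecomposition.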
Figure 2-12: (Left) Original distribution; (right) decorrelated distribution after ICA projection. The original and ICA-projected axes are shown as the horizontal and vertical axes, respectively.
2.4 Further Readings<br />
In this chapter, we have focused only on the linear version of ICA and on one method for solving ICA, namely FastICA. Note that there also exist methods for non-linear ICA and for time-dependent ICA. The reader can refer to [Hyvärinen et al., 2003] for further reading on ICA and its applications.