
MACHINE LEARNING TECHNIQUES - LASA


$$
v_i \;=\; \frac{1}{M\lambda_i}\sum_{j=1}^{M} x^j \left(x^j\right)^T v_i
\;=\; \frac{1}{M\lambda_i}\sum_{j=1}^{M} \left(\left(x^j\right)^T v_i\right) x^j
\qquad (5.6)
$$

Introducing for each data point $x^j$ a set of scalars $\alpha_i^j = \left(x^j\right)^T v_i$ (these correspond to the value of the projection of $x^j$ onto the eigenvector $v_i$), each eigenvector $v_i$ with non-zero eigenvalue $\lambda_i$ lies in the space spanned by the input vectors $x^1, \dots, x^M$:

$$
v_i \;=\; \frac{1}{M\lambda_i}\sum_{j=1}^{M} \alpha_i^j \, x^j.
\qquad (5.7)
$$
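The expansion (5.7) can be checked numerically: computing the eigenvectors of the correlation matrix of a centered data set and reconstructing each one from the scalars $\alpha_i^j$ recovers the eigenvector exactly. The following is a minimal sketch with NumPy; the toy data and dimensions are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative toy data (an assumption): M = 50 points in 3 dimensions.
rng = np.random.default_rng(0)
M, d = 50, 3
X = rng.standard_normal((M, d))          # rows are the data points x^j
X -= X.mean(axis=0)                      # center the data

C = (X.T @ X) / M                        # correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvectors as columns

# For each eigenvector with non-zero eigenvalue, check eq. (5.7):
# v_i = 1/(M*lambda_i) * sum_j alpha_i^j x^j, with alpha_i^j = (x^j)^T v_i.
for lam, v in zip(eigvals, eigvecs.T):
    if lam < 1e-12:
        continue                         # skip zero eigenvalues
    alpha = X @ v                        # alpha_i^j for j = 1..M
    v_reconstructed = (alpha @ X) / (M * lam)
    assert np.allclose(v, v_reconstructed)
```

The check follows directly from $C v_i = \lambda_i v_i$: multiplying through by $\frac{1}{\lambda_i}$ expresses $v_i$ as a weighted sum of the data points.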

Assuming now that the data are projected into a feature space through a non-linear map $\varphi$, and that these are centered in feature space, i.e.

$$
\sum_{i=1}^{M} \varphi\left(x^i\right) = 0,
\qquad (5.8)
$$

the correlation matrix in the feature space is then given by:

$$
C_\varphi = \frac{1}{M} F F^T,
\qquad (5.9)
$$

where each of the $i = 1, \dots, M$ columns of $F$ is composed of the projections $\varphi\left(x^i\right)$.
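When the feature map is available explicitly, the matrix $F$ and the correlation matrix $C_\varphi$ of (5.9) can be formed directly, enforcing the centering condition (5.8) by subtracting the feature-space mean. The sketch below uses a degree-2 polynomial map as the non-linear $\varphi$; this particular choice of map and data is an assumption for illustration only.

```python
import numpy as np

def phi(x):
    """Illustrative non-linear map (an assumed choice, not the text's):
    degree-2 polynomial features of a 2-D point x = (x1, x2)."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

rng = np.random.default_rng(1)
M = 40
X = rng.standard_normal((M, 2))           # data points x^1, ..., x^M

F = np.column_stack([phi(x) for x in X])  # column i holds phi(x^i)
F -= F.mean(axis=1, keepdims=True)        # centering in feature space, eq. (5.8)

C_phi = (F @ F.T) / M                     # correlation matrix, eq. (5.9)
```

In practice kernel PCA avoids forming $F$ explicitly and works with inner products instead; the explicit construction above is only a direct reading of (5.8) and (5.9).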

© A.G.Billard 2004 – Last Update March 2011
