
Note that since we never have the projections $\phi(x_i)$, we cannot compute $w$ in (5.71) explicitly. We can, however, once more exploit the kernel trick to compute $f$. Using

$$\langle w, \phi(x)\rangle = \sum_{i=1}^{M}\left(\alpha_i-\alpha_i^{*}\right)\langle\phi(x_i),\phi(x)\rangle = \sum_{i=1}^{M}\left(\alpha_i-\alpha_i^{*}\right)k(x_i,x),$$

we can compute an estimate of our regression function through:

$$f(x) = \sum_{i=1}^{M}\left(\alpha_i-\alpha_i^{*}\right)k(x_i,x) + b \qquad (5.73)$$
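For concreteness, the following is a minimal sketch (not part of the original notes) of how (5.73) could be evaluated once the dual variables $\alpha_i$, $\alpha_i^{*}$ and the bias $b$ have been obtained from the SVR optimization. The Gaussian kernel is used purely as an example choice of $k$, and all names are illustrative:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Example kernel choice: k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def svr_predict(x, X, alpha, alpha_star, b, kernel=rbf_kernel):
    """Evaluate eq. (5.73): f(x) = sum_i (alpha_i - alpha_i*) k(x_i, x) + b.

    X          : (M, d) array of training inputs x_i
    alpha      : (M,) array of dual variables alpha_i
    alpha_star : (M,) array of dual variables alpha_i*
    b          : scalar bias term
    """
    coeffs = alpha - alpha_star
    # Sum over all M datapoints; cost is linear in M (see remark below).
    return sum(c * kernel(xi, x) for c, xi in zip(coeffs, X)) + b
```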

Observe that the computational cost of evaluating the regression function grows linearly with the number $M$ of datapoints. As for SVM, the computation can be reduced considerably by restricting the sum to the datapoints whose terms are non-zero, the so-called support vectors. A point $x_i$ drops out of the sum when $\left(\alpha_i-\alpha_i^{*}\right)=0$. Note that for all points within the $\varepsilon$-insensitive tube $\alpha_i=\alpha_i^{*}=0$ (so as to satisfy the KKT conditions given in (5.72)).
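Since only the support vectors carry non-zero coefficients, in practice one would store just those points before calling a predictor such as the hypothetical svr_predict sketched above. A sketch of that compression step, with a small numerical tolerance standing in for exact zero:

```python
def keep_support_vectors(X, alpha, alpha_star, tol=1e-8):
    """Discard points with (alpha_i - alpha_i*) = 0, which contribute
    nothing to the sum in (5.73); points inside the eps-insensitive
    tube have alpha_i = alpha_i* = 0 and are dropped here.
    """
    coeffs = alpha - alpha_star
    mask = np.abs(coeffs) > tol
    return X[mask], alpha[mask], alpha_star[mask]
```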

