MACHINE LEARNING TECHNIQUES - LASA


To better understand the effect that ν has on the optimization, observe first that if ν ≥ 1, then the second and third conditions in (5.75) are redundant; hence all values of ν greater than 1 have the same effect on the optimization. One should therefore work solely with values in the range 0 ≤ ν ≤ 1.
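For reference, the dual constraints of ν-SVR in the standard formulation of Schölkopf et al. (2000) read as follows; (5.75) is assumed here to be of this form, up to notation and ordering:

\[
0 \le \alpha_i,\ \alpha_i^* \le \frac{C}{\ell}, \qquad
\sum_{i=1}^{\ell} \left(\alpha_i - \alpha_i^*\right) = 0, \qquad
\sum_{i=1}^{\ell} \left(\alpha_i + \alpha_i^*\right) \le C\,\nu .
\]

Since at the optimum at most one of each pair (α_i, α_i*) is nonzero, the sum Σ_i (α_i + α_i*) never exceeds ℓ · C/ℓ = C, so the budget constraint cannot be active once ν ≥ 1; raising ν beyond 1 therefore leaves the solution unchanged.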

Interestingly, a number of properties can be derived:

• ν is an upper bound on the fraction of errors (i.e. the proportion of data points with ξ > 0);
• ν is a lower bound on the fraction of support vectors.

These two properties are illustrated in Figure 5-16 and can be checked empirically, as in the sketch below.
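The following sketch is not from the text: it uses scikit-learn's NuSVR (a wrapper around libsvm's ν-SVR) on a toy regression problem. The toy data, the kernel width, and the identification of errors as support vectors whose dual coefficient sits at the box bound C (libsvm's parametrization) are all assumptions of this sketch.

import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 500)).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X.ravel()) + 0.1 * rng.standard_normal(500)

for nu in (0.1, 0.3, 0.5):
    model = NuSVR(nu=nu, C=100.0, kernel="rbf", gamma=10.0).fit(X, y)
    frac_sv = len(model.support_) / len(y)
    # Errors (points with xi > 0) lie strictly outside the tube; their dual
    # coefficients have reached the box bound C in libsvm's parametrization.
    at_bound = np.isclose(np.abs(model.dual_coef_).ravel(), model.C, rtol=1e-3)
    frac_err = at_bound.sum() / len(y)
    print(f"nu={nu:.1f}  errors={frac_err:.2f} (<= nu)  SVs={frac_sv:.2f} (>= nu)")

As ν grows, both fractions grow with it, the error fraction staying below ν and the support-vector fraction above it.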

In summary, ν-SVR is advantageous over ε-SVR in that it adjusts the sensitivity of the algorithm automatically, by computing ε from the data. To some extent, this is equivalent to fitting a model of the noise on the data (assuming a uniform noise model). This is illustrated in Figure 5-15.

Figure 5-15: Effect of the automatic adaptation of ε using ν-SVR. (Top) Data with no noise. (Bottom) The same dataset with additive white noise. In both plots, ν-SVR was fitted with C = 100, ν = 0.05, and a Gaussian kernel with kernel width 0.021.
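The adaptation of ε to the noise level can also be reproduced numerically. A minimal sketch, under the same assumptions as above (scikit-learn's NuSVR stands in for the ν-SVR of the text; since the ε found by ν-SVR is not exposed by scikit-learn, it is approximated by the largest residual among non-support vectors, which lie strictly inside the tube):

import numpy as np
from sklearn.svm import NuSVR

def fitted_epsilon(X, y, nu=0.05, C=100.0, gamma=10.0):
    """Fit nu-SVR and return an estimate of the adapted tube width epsilon."""
    model = NuSVR(nu=nu, C=C, kernel="rbf", gamma=gamma).fit(X, y)
    residuals = np.abs(y - model.predict(X))
    inside = np.ones(len(y), dtype=bool)
    inside[model.support_] = False      # non-SVs sit strictly inside the tube
    return residuals[inside].max()

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y_clean = np.sin(2.0 * np.pi * X.ravel())
y_noisy = y_clean + 0.2 * rng.standard_normal(len(y_clean))

print("estimated epsilon, noise-free data:", fitted_epsilon(X, y_clean))
print("estimated epsilon, noisy data     :", fitted_epsilon(X, y_noisy))

With ν held fixed, the estimated tube is much wider on the noisy data: ε has adapted to the noise level automatically, mirroring Figure 5-15.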

© A.G.Billard 2004 – Last Update March 2011
