
Examples:

Figure 5-12 illustrates the application of SVR on the same dataset (these data were generated using MLDemos). When using a linear kernel with a large ε, one can completely encapsulate the datapoints while still yielding a very poor fit. A Gaussian kernel allows the nonlinearities to be fitted much better. This is, however, very sensitive to the penalty given to poorly fitted points through the parameter C: a high penalty forces a tighter fit to the datapoints, whereas a lower penalty yields a smoother regression curve.

Figure 5-12: Example of ε-SVR on a two-dimensional dataset. Datapoints are shown as plain dots, the regression signal as a solid line, and the boundaries of the ε-insensitive tube as light grey lines, using a linear kernel (top) and a Gaussian kernel (bottom). TOP: effect of increasing the ε-tube, with ε=0.02 (left) and ε=0.1 (right). BOTTOM: effect of increasing the parameter C (from left to right, C=10, 100 and 1000), which penalizes poorly fitted datapoints.
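
The same kind of experiment can be reproduced in a few lines of code. The snippet below is a minimal sketch using scikit-learn's SVR rather than MLDemos (which produced the figures); the synthetic dataset and all parameter values are illustrative assumptions.

# Minimal epsilon-SVR sketch with scikit-learn; the dataset and parameter
# values are illustrative assumptions, not those used for Figure 5-12.
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-1.0, 1.0, 60)).reshape(-1, 1)   # 1-D inputs
y = np.sin(3.0 * X).ravel() + 0.1 * rng.randn(60)        # noisy nonlinear target

# Linear kernel with a wide epsilon-tube: the tube can encapsulate all the
# datapoints while the regression signal itself fits the curve poorly.
svr_linear = SVR(kernel="linear", C=100.0, epsilon=0.1).fit(X, y)

# Gaussian (RBF) kernel: captures the nonlinearity; increasing C penalizes
# poorly fitted datapoints more strongly and tightens the fit.
for C in (10.0, 100.0, 1000.0):
    svr_rbf = SVR(kernel="rbf", C=C, epsilon=0.02, gamma=2.0).fit(X, y)
    print(f"RBF kernel, C={C:6.0f}: {svr_rbf.support_.size} support vectors")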

SVR, like all other kernel-based regression techniques, remains very sensitive to the choice of the kernel hyperparameters. This is illustrated in Figure 5-13. Too small a kernel width will lead to overfitting of the data, while too large a width may lead to very poor performance. The latter effect can be compensated by choosing appropriate values for the hyperparameters of the optimization function, namely C and ε; see Figure 5-14. The small kernel width that had led to a poor fit in Figure 5-13 is compensated by relaxing the constraints, using a large ε and a smaller C. We also see that relaxing the constraints and widening the ε-tube decrease the number of support vectors: while the very tight fit in Figure 5-13 used almost all datapoints as support vectors, the looser fit in Figure 5-14 used far fewer. This trade-off can also be checked numerically, as in the sketch below.
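
The following sketch, again using scikit-learn's SVR with assumed parameter values (gamma playing the role of the inverse kernel width), contrasts a very tight fit with a relaxed one and reports how many datapoints end up as support vectors.

# Sketch of the trade-off between kernel width and the constraints C and
# epsilon; all parameter values are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(1)
X = np.sort(rng.uniform(-1.0, 1.0, 80)).reshape(-1, 1)
y = np.sin(3.0 * X).ravel() + 0.1 * rng.randn(80)

# Small kernel width (large gamma) with strict constraints: near-interpolation,
# where almost every datapoint ends up as a support vector.
tight = SVR(kernel="rbf", gamma=200.0, C=1000.0, epsilon=0.01).fit(X, y)

# Same kernel width, but a wider epsilon-tube and a smaller C relax the
# constraints, which typically leaves far fewer support vectors.
loose = SVR(kernel="rbf", gamma=200.0, C=1.0, epsilon=0.2).fit(X, y)

print("tight fit:", tight.support_.size, "support vectors out of", len(X))
print("loose fit:", loose.support_.size, "support vectors out of", len(X))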

