
5.8 Support Vector Regression

In SVM, we considered a mapping from the input data X onto a binary output y = ±1. Support Vector Regression (SVR) extends the principle of SVM to allow a mapping f onto a real-valued output:

$$ f : X \to \mathbb{R}, \qquad x \mapsto y = f(x) \tag{5.58} $$

SVR starts from the standard linear regression model and applies it in feature space. Assume a projection of the data into a feature space, X → φ(X) ∈ H. Then, SVR seeks a linear mapping in feature space of the form:

$$ f(x) = \langle w, \phi(x) \rangle + b, \qquad w \in H,\; b \in \mathbb{R} \tag{5.59} $$
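To make the form of (5.59) concrete, the following minimal sketch evaluates such a linear model in feature space with an explicit, hypothetical feature map φ (degree-2 polynomial features of a scalar input) and illustrative values for w and b; in practice φ is typically accessed only implicitly through a kernel function.

```python
import numpy as np

def phi(x):
    # Hypothetical explicit feature map: degree-2 polynomial features of a scalar x.
    return np.array([1.0, x, x**2])

# Illustrative weight vector w in feature space H and offset b (not taken from the notes).
w = np.array([0.5, -1.0, 2.0])
b = 0.3

def f(x):
    # Eq. (5.59): f(x) = <w, phi(x)> + b, a linear model in feature space.
    return np.dot(w, phi(x)) + b

print(f(1.5))  # 0.5 - 1.5 + 4.5 + 0.3 = 3.8
```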

To recall, SVM approximates the classification problem by choosing a subset of the data points, the support vectors, to support the decision function. This sparsification of the training dataset is the key strength of the algorithm. When considering non-separable datasets, SVM introduced a slack variable to allow for imprecise classification. SVR proceeds similarly and tries to find the optimal number of support vectors while allowing for imprecise fitting. The allowed imprecision of the fit f is controlled by a parameter ε ≥ 0 and is measured through an ε-insensitive loss function:

$$ \left| y - f(x) \right|_{\varepsilon} = \max\left\{0,\; \left| y - f(x) \right| - \varepsilon \right\} \tag{5.60} $$
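As a minimal sketch, the loss of (5.60) can be evaluated directly; the function name and the value of ε below are illustrative only.

```python
import numpy as np

def eps_insensitive_loss(y, f_x, eps=0.1):
    # Eq. (5.60): deviations smaller than eps cost nothing,
    # larger deviations are penalised linearly by their excess over eps.
    return np.maximum(0.0, np.abs(y - f_x) - eps)

# A residual of 0.05 lies inside the eps-tube (zero loss);
# a residual of 0.30 costs 0.30 - 0.10 = 0.20.
print(eps_insensitive_loss(np.array([1.05, 1.30]), np.array([1.0, 1.0]), eps=0.1))
```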

Points with a non-zero ε-loss lie outside the ε-insensitive tube that surrounds the function f, see Figure 5-10.

Figure 5-10: Effect of a non-linear regression through SVR. The tightness of the ε-insensitive tube around the regression signal varies along the state space as an effect of the distance in feature space between the support vectors (the support vectors are plain circles). Datapoints within the ε-insensitive tube do not influence the regression model and are indicated with unfilled circles.
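This sparsification can be observed with an off-the-shelf implementation. The sketch below uses scikit-learn's SVR class (an assumption, not part of these notes) on synthetic data: only samples on or outside the ε-insensitive tube are retained as support vectors.

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D regression problem (synthetic data, for illustration only).
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, size=(50, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)

# RBF-kernel SVR; epsilon sets the half-width of the insensitive tube.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# Only points on or outside the eps-tube become support vectors;
# the remaining samples do not influence the fitted model.
print("support vectors:", len(svr.support_), "out of", len(X))
```

Widening the tube by increasing epsilon typically reduces the number of support vectors, at the cost of a coarser fit.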

