MACHINE LEARNING TECHNIQUES - LASA


4 Regression Techniques

There is a growing interest in machine learning in designing powerful algorithms for nonlinear regression. We will consider a few of these here. The principle behind the techniques presented in this chapter is the same as that used in most other variants found in the literature, and hence this offers a good background for the interested reader.

Consider a multi-dimensional (zero-mean) variable $x \in \mathbb{R}^N$ and a one-dimensional variable $y \in \mathbb{R}$. Regression techniques aim at approximating the relationship $f$ between $y$ and $x$ by building a model of the form:

$$y = f(x) \tag{4.1}$$

4.1 Linear Regression

The most classical technique, with which the reader will probably be familiar, is the linear regression model, whereby one assumes that $f$ is a linear function parametrized by $w \in \mathbb{R}^N$, that is:

$$y = f(x) = w^T x \tag{4.2}$$

For a given instance of the pair $x$ and $y$ one can solve explicitly for $w$. Consider now the case where we are provided with a set of $M$ observed instances $X = \{x^i\}_{i=1}^{M}$ and $Y = \{y^i\}_{i=1}^{M}$ of the variables $x$ and $y$, such that the observation of $y$ has been corrupted by some noise which may or may not be a function of $x$, i.e. $\varepsilon(x)$:

$$y = w^T x + \varepsilon(x) \tag{4.3}$$

The classical means to estimate the parameters $w$ is least-squares (mean-square error) minimization, which we review next.
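As a quick illustration of the linear model in Eqs. (4.2)-(4.3), the sketch below fits $w$ by ordinary least squares on synthetic data. This is a minimal example assuming NumPy; the data set, noise level and variable names are hypothetical and serve only to illustrate the estimate.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 200, 3                                # M noisy observations of an N-dimensional x
    w_true = rng.normal(size=N)
    X = rng.normal(size=(M, N))                  # zero-mean inputs x^i
    y = X @ w_true + 0.1 * rng.normal(size=M)    # y^i = w^T x^i + eps(x^i), cf. Eq. (4.3)

    # Least-squares estimate: minimize (1/2) * sum_i (w^T x^i - y^i)^2
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("true w:     ", w_true)
    print("estimated w:", w_hat)

Minimizing this mean-square error over the $M$ observations is the same criterion that Eq. (4.4) in the next section applies to pairs of observed variables.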

4.2 Partial Least Square Methods

Adapted from R. Rosipal and N. Kramer, Overview and Recent Advances in Partial Least Squares, in C. Saunders et al. (Eds.): SLSFS 2005, LNCS 3940, pp. 34–51, 2006.

Partial Least Squares (PLS) refers to a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises regression and classification tasks as well as dimension reduction techniques and modeling tools. As the name implies, PLS is based on least-squares regression. Consider a set of $M$ pairs of variables $X = \{x^i\}_{i=1}^{M}$ and $Y = \{y^i\}_{i=1}^{M}$; least-squares regression looks for a mapping $w$ that sends $X$ onto $Y$, such that:

$$\min_{w} \; \frac{1}{2} \sum_{i=1}^{M} \left( x^i w - y^i \right)^2 \tag{4.4}$$

Note that each variable pair must have the same dimension. This is hence a particular case of regression.

PCA and, by extension, CCA can be viewed as regression problems, whereby one set of variables $Y$ is expressed as a linear combination of the second set of variables $X$. The primary problem with PCA regression is that PCA does not take the response variable into account when constructing the principal components or latent variables. Thus, even for easy classification problems such as that shown in Figure 4-1, the method may select poor latent variables. PLS incorporates information about the response in the model through its latent variables.

Figure 4-1: LEFT: the line is the direction of maximum variance (w) constructed by PCA. RIGHT: the line is the direction w constructed by PLS.

The underlying assumption of all PLS methods is that the observed data are generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. In its general form, PLS creates orthogonal score vectors (also called latent vectors or components) by maximizing the covariance between different sets of variables. In this respect, PLS is similar in principle to CCA; it differs, however, in the algorithm.

The connections between PCA, CCA and PLS can be seen through the optimization criterion each uses to define projection directions. PCA projects the original variables onto a direction of maximal variance, called the principal direction. Similarly, CCA finds the direction of maximal correlation across two sets of variables. PLS represents a form of CCA in which the criterion of maximal correlation is balanced with the requirement to explain as much variance as possible in both the X and Y spaces.

Formally, CCA and PLS can be viewed as solutions to the following optimization problem. For two sets of variables $X$ and $Y$, find the vectors $w_x$ and $w_y$ that maximize:

$$\max_{\|w_x\| = \|w_y\| = 1} \; \frac{\operatorname{cov}\left( X w_x,\, Y w_y \right)^2}{\left[ (1-\gamma_X)\operatorname{var}(X w_x) + \gamma_X \right]\left[ (1-\gamma_Y)\operatorname{var}(Y w_y) + \gamma_Y \right]} \tag{4.5}$$

For $\gamma_X = \gamma_Y = 0$, the above optimization leads to CCA, while for $\gamma_X = \gamma_Y = 1$ the PLS solution is found.
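To make the criterion in Eq. (4.5) concrete, the sketch below computes the first pair of PLS directions for the case $\gamma_X = \gamma_Y = 1$, where maximizing the squared covariance under the unit-norm constraints amounts to taking the leading singular vectors of $X^T Y$. This is a minimal illustration assuming NumPy and column-centred data matrices; the synthetic latent-variable data and all variable names are chosen for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    M = 500
    latent = rng.normal(size=(M, 1))                     # a single hidden driver, mimicking the latent-variable assumption
    X = latent @ rng.normal(size=(1, 4)) + 0.2 * rng.normal(size=(M, 4))
    Y = latent @ rng.normal(size=(1, 2)) + 0.2 * rng.normal(size=(M, 2))
    X -= X.mean(axis=0)                                  # centre both blocks
    Y -= Y.mean(axis=0)

    # gamma_X = gamma_Y = 1: maximize cov(X w_x, Y w_y)^2 with ||w_x|| = ||w_y|| = 1.
    # The maximizers are the leading singular vectors of the cross-covariance X^T Y.
    U, s, Vt = np.linalg.svd(X.T @ Y)
    w_x, w_y = U[:, 0], Vt[0, :]
    t, u = X @ w_x, Y @ w_y                              # first pair of score (latent) vectors
    print("correlation of first scores:", np.corrcoef(t, u)[0, 1])

Setting $\gamma_X = \gamma_Y = 0$ instead divides the squared covariance by the score variances, which recovers the CCA directions of maximal correlation.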
