MACHINE LEARNING TECHNIQUES - LASA
4 Regression Techniques

There is growing interest in machine learning in designing powerful algorithms for nonlinear regression. We will here consider a few of these. The principle behind the techniques presented here is the same as the one used in most other variants found in the literature, and hence this offers a good background for the interested reader.

Consider a multi-dimensional (zero-mean) variable x ∈ ℝ^N and a one-dimensional variable y ∈ ℝ. Regression techniques aim at approximating the relationship f between y and x by building a model of the form:

    y = f(x)                                                        (4.1)

4.1 Linear Regression

The most classical technique, with which the reader will probably be familiar, is the linear regressive model, whereby one assumes that f is a linear function parametrized by w ∈ ℝ^N, that is:

    y = f(x, w) = x^T w                                             (4.2)

For a given instance of the pair x and y, one can solve explicitly for w. Consider now the case where we are provided with a set of M observed instances X = {x^i}_{i=1}^M and Y = {y^i}_{i=1}^M of the variables x and y, such that the observations of y have been corrupted by some noise which may or may not be a function of x, i.e. ε(x):

    y = x^T w + ε(x)                                                (4.3)

The classical means to estimate the parameters w is through mean-square error minimization, which we review next.
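Before reviewing those methods, the following is a minimal numerical sketch of the linear model (4.2)-(4.3) and of a least-squares estimate of w. The ground-truth vector w_true, the noise level, the data sizes and the use of NumPy's lstsq routine are illustrative assumptions, not part of these notes.

import numpy as np

# Synthetic data: x in R^N (zero-mean), y = x^T w + eps(x), cf. Eqs. (4.2)-(4.3)
rng = np.random.default_rng(0)
N, M = 3, 200                        # input dimension and number of observations
w_true = np.array([1.5, -2.0, 0.5])  # ground-truth parameters (illustrative)
X = rng.normal(size=(M, N))          # M observed inputs, row i is x^i
eps = 0.1 * rng.normal(size=M)       # additive observation noise
y = X @ w_true + eps                 # noisy observations of y, Eq. (4.3)

# Least-squares estimate of w: minimizer of (1/2) sum_i (x^i w - y^i)^2
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated w:", w_hat)         # should be close to w_true

With enough observations and moderate noise, the recovered w_hat approaches w_true, which is the behaviour the mean-square estimator reviewed below formalizes.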
4.2 Partial Least Squares Methods

Adapted from R. Rosipal and N. Krämer, Overview and Recent Advances in Partial Least Squares, in C. Saunders et al. (Eds.): SLSFS 2005, LNCS 3940, pp. 34-51, 2006.

Partial Least Squares (PLS) refers to a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises regression and classification tasks as well as dimension reduction techniques and modeling tools. As the name implies, PLS is based on least-squares regression. Consider a set of M pairs of variables X = {x^i}_{i=1}^M and Y = {y^i}_{i=1}^M; least-squares regression looks for a mapping w that sends X onto Y, such that:

    \min_w \; \frac{1}{2} \sum_{i=1}^{M} \left( {x^i}^T w - y^i \right)^2                    (4.4)

Note that each variable pair must have matching dimensions under the mapping w; this is hence a particular case of regression. PCA and, by extension, CCA can also be viewed as regression problems, whereby one set of variables Y is expressed as a linear combination of a second set of variables X.

The primary problem with PCA regression is that PCA does not take the response variable into account when constructing the principal components or latent variables. Thus, even for easy classification problems such as that shown in Figure 4-1, the method may select poor latent variables. PLS, in contrast, incorporates information about the response into the model through its latent variables.

Figure 4-1: LEFT: the line is the direction of maximal variance (w) constructed by PCA. RIGHT: the line is the direction w constructed by PLS.

The underlying assumption of all PLS methods is that the observed data are generated by a system or process driven by a small number of latent (not directly observed or measured) variables. In its general form, PLS creates orthogonal score vectors (also called latent vectors or components) by maximizing the covariance between the different sets of variables. In this respect, PLS is similar in principle to CCA; it differs, however, in the algorithm.

The connections between PCA, CCA and PLS can be seen through the optimization criteria they use to define the projection directions. PCA projects the original variables onto a direction of maximal variance, called the principal direction. Similarly, CCA finds the direction of maximal correlation across two sets of variables. PLS represents a form of CCA in which the criterion of maximal correlation is balanced with the requirement to explain as much variance as possible in both the X and Y spaces. Formally, CCA and PLS can be viewed as solutions of the following optimization problem: for two sets of variables X and Y, find the pair of vectors w_x, w_y that maximizes the quantity

    \max_{\|w_x\| = \|w_y\| = 1} \; \frac{\left[ \mathrm{cov}(X w_x, Y w_y) \right]^2}{\left( [1-\gamma_X]\,\mathrm{var}(X w_x) + \gamma_X \right) \left( [1-\gamma_Y]\,\mathrm{var}(Y w_y) + \gamma_Y \right)}        (4.5)

For γ_X = γ_Y = 0, the above optimization leads to CCA, while for γ_X = γ_Y = 1, the solution to PLS is found.
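To make the role of the parameters γ_X and γ_Y in (4.5) concrete, here is a small sketch that solves the criterion at its two extreme settings on toy data. The eigen-formulation used (solving B_x^{-1} C_XY B_y^{-1} C_YX w_x = λ w_x with regularized covariance matrices B_x, B_y, in the spirit of the unified framework discussed by Rosipal and Krämer), as well as the function name first_directions and the toy data, are assumptions made for illustration only.

import numpy as np

def first_directions(X, Y, gamma_x, gamma_y):
    """First projection pair (w_x, w_y) maximizing criterion (4.5).

    gamma = 1 recovers PLS (maximal covariance); gamma = 0 recovers CCA
    (maximal correlation)."""
    M = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)   # zero-mean data
    Cxx = Xc.T @ Xc / M                     # covariance of X
    Cyy = Yc.T @ Yc / M                     # covariance of Y
    Cxy = Xc.T @ Yc / M                     # cross-covariance

    # Regularized matrices appearing in the denominator of (4.5)
    Bx = (1 - gamma_x) * Cxx + gamma_x * np.eye(Cxx.shape[0])
    By = (1 - gamma_y) * Cyy + gamma_y * np.eye(Cyy.shape[0])

    # w_x is the leading eigenvector of Bx^{-1} Cxy By^{-1} Cyx;
    # eigenvalues are real and non-negative in theory, take real parts
    # to guard against numerical round-off.
    Mx = np.linalg.solve(Bx, Cxy) @ np.linalg.solve(By, Cxy.T)
    eigvals, eigvecs = np.linalg.eig(Mx)
    w_x = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w_y = np.linalg.solve(By, Cxy.T) @ w_x  # matching direction in Y space
    return w_x / np.linalg.norm(w_x), w_y / np.linalg.norm(w_y)

# Toy data: two sets of variables coupled through one shared latent variable
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))
X = np.hstack([latent + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])          # 3-D X, first dim informative
Y = np.hstack([2.0 * latent + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 1))])          # 2-D Y

w_x_pls, w_y_pls = first_directions(X, Y, 1.0, 1.0)   # gamma = 1: PLS
w_x_cca, w_y_cca = first_directions(X, Y, 0.0, 0.0)   # gamma = 0: CCA
print("PLS w_x:", np.round(w_x_pls, 3))
print("CCA w_x:", np.round(w_x_cca, 3))

With γ = 1 the returned w_x is simply the leading left singular vector of the cross-covariance C_XY, i.e. the first PLS direction, while with γ = 0 it is the first canonical direction; intermediate values of γ interpolate between the two criteria.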
© A.G.Billard 2004 – Last Update March 2011