MACHINE LEARNING TECHNIQUES - LASA
[Figure 5-2 appears here: four panels over axes X1, X2. Top row: Eigenvector=1 (Eigenvalue=0.246) and Eigenvector=2 (Eigenvalue=0.232); bottom row: Eigenvector=1 (Eigenvalue=0.160) and Eigenvector=2 (Eigenvalue=0.149).]

Figure 5-2: Example of clustering done with kernel PCA with a Gaussian kernel and kernel width σ = 0.1 (top) and σ = 0.04 (bottom). Reconstruction in the original data space of the projections onto the first two eigenvectors. The contour lines represent regions of equal projection value.
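The kind of plot shown in Figure 5-2 can be reproduced by fitting kernel PCA with a Gaussian (RBF) kernel and evaluating the projection of a dense grid of points onto the first two eigenvectors. The sketch below is only illustrative: it uses scikit-learn's KernelPCA for brevity, and the two-cluster toy data, the value of σ and all variable names are our own choices, not taken from these notes.

```python
# Minimal sketch: contour lines of the kernel PCA projections onto the first
# two eigenvectors, with a Gaussian kernel of width sigma (gamma = 1/(2 sigma^2)).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# two noisy clusters in [-1, 1]^2 used as stand-in data
X = np.vstack([rng.normal([-0.5, 0.0], 0.15, (100, 2)),
               rng.normal([0.5, 0.0], 0.15, (100, 2))])

sigma = 0.1                               # kernel width, as in the top row of Figure 5-2
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0 / (2 * sigma**2))
kpca.fit(X)

# evaluate the projection of every grid point onto the first two eigenvectors
xx, yy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1.5, 1.5, 200))
grid = np.column_stack([xx.ravel(), yy.ravel()])
proj = kpca.transform(grid)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for k, ax in enumerate(axes):
    ax.contour(xx, yy, proj[:, k].reshape(xx.shape), levels=15)  # equal-projection contours
    ax.scatter(X[:, 0], X[:, 1], s=5, c="k")
    ax.set_title(f"Eigenvector={k + 1}")
    ax.set_xlabel("X1"); ax.set_ylabel("X2")
plt.show()
```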
5.4 Kernel CCA

The linear version of CCA was treated in Section 2.2. Here we consider its extension to nonlinear projections to find each pair of eigenvectors. We start with a brief recall of CCA.

Consider a pair of multivariate datasets $X=\{x^i \in \mathbb{R}^N\}_{i=1}^M$ and $Y=\{y^i \in \mathbb{R}^q\}_{i=1}^M$, of which we measure a sample of $M$ instance pairs $(x^i, y^i)$. CCA consists in determining a pair of projection vectors $w_x$ for $X$ and $w_y$ for $Y$ such that the correlation $\rho$ between the projections $X' = w_x^T X$ and $Y' = w_y^T Y$ (the canonical variates) is maximized:

$$\rho = \max_{w_x, w_y} \operatorname{corr}(X', Y') = \max_{w_x, w_y} \frac{w_x^T E\{XY^T\} w_y}{\left\| w_x^T X \right\| \left\| w_y^T Y \right\|} = \max_{w_x, w_y} \frac{w_x^T C_{xy} w_y}{\sqrt{w_x^T C_{xx} w_x}\,\sqrt{w_y^T C_{yy} w_y}} \qquad (5.15)$$

where $C_{xy}$ is the inter-set covariance matrix and $C_{xx}$, $C_{yy}$ are the within-set covariance matrices; $C_{xy}$ is $N \times q$, $C_{xx}$ is $N \times N$ and $C_{yy}$ is $q \times q$.

Non-linear Case: Kernel CCA extends this notion to non-linear projections. As for kernel PCA, let us assume that both sets of data have been projected into a feature space through non-linear maps $\phi_x$, $\phi_y$, such that we now have the two sets $\{\phi_x(x^i)\}_{i=1}^M$ and $\{\phi_y(y^i)\}_{i=1}^M$. Let us further assume that the data are centered in feature space, i.e. $\sum_{i=1}^M \phi_x(x^i)=0$ and $\sum_{i=1}^M \phi_y(y^i)=0$ (if the data are not centered in feature space, one can construct a Gram matrix that ensures that they are centered, as done for kernel PCA; see the exercise session). Kernel canonical correlation analysis aims at maximizing the correlation between the data in their corresponding projection spaces. Similarly to kernel PCA, we can construct the two kernel matrices $K_x = F_x^T F_x$ and $K_y = F_y^T F_y$, where $F_x$ and $F_y$ are the matrices whose columns are the $M$ projections $\{\phi_x(x^i)\}_{i=1}^M$ and $\{\phi_y(y^i)\}_{i=1}^M$, respectively (so that $K_x$ and $K_y$ are both $M \times M$). The weights $w_x$, $w_y$ can be expressed as linear combinations of the training examples in feature space, i.e. $w_x = F_x \alpha_x$ and $w_y = F_y \alpha_y$. Substituting into the equation for linear CCA yields the following optimization:

$$\max_{\alpha_x, \alpha_y} \rho = \max_{\alpha_x, \alpha_y} \frac{\alpha_x^T K_x K_y \alpha_y}{\left(\alpha_x^T K_x^2 \alpha_x\right)^{1/2}\left(\alpha_y^T K_y^2 \alpha_y\right)^{1/2}} \qquad (5.16)$$
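To make the optimization (5.16) concrete, the sketch below solves it as a generalized eigenvalue problem on the two centered Gram matrices. This is only a minimal illustration under stated assumptions: it uses a Gaussian kernel, adds a small ridge term eps on the diagonals (a standard regularization of kernel CCA, discussed further on in these notes), and the function names, the toy data and the value of eps are our own choices.

```python
# Minimal kernel CCA sketch following Eq. (5.16), assuming a Gaussian kernel
# and a small ridge term eps for numerical stability.
import numpy as np
from scipy.linalg import eigh

def gaussian_kernel(A, B, sigma):
    """Gram matrix with entries k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def center_gram(K):
    """Center the Gram matrix in feature space: K <- H K H with H = I - 1/M."""
    M = K.shape[0]
    H = np.eye(M) - np.ones((M, M)) / M
    return H @ K @ H

def kernel_cca(X, Y, sigma_x=1.0, sigma_y=1.0, eps=1e-3):
    """Return the leading canonical correlation rho and the dual weights
    alpha_x, alpha_y maximizing Eq. (5.16), via a generalized eigenproblem."""
    M = X.shape[0]
    Kx = center_gram(gaussian_kernel(X, X, sigma_x))
    Ky = center_gram(gaussian_kernel(Y, Y, sigma_y))

    # Generalized eigenvalue problem equivalent to maximizing Eq. (5.16):
    # [ 0      Kx Ky ] [ax]         [ Kx^2 + eps I       0       ] [ax]
    # [ Ky Kx    0   ] [ay] = rho * [      0        Ky^2 + eps I ] [ay]
    A = np.block([[np.zeros((M, M)), Kx @ Ky],
                  [Ky @ Kx, np.zeros((M, M))]])
    B = np.block([[Kx @ Kx + eps * np.eye(M), np.zeros((M, M))],
                  [np.zeros((M, M)), Ky @ Ky + eps * np.eye(M)]])
    vals, vecs = eigh(A, B)            # eigenvalues in ascending order
    rho, v = vals[-1], vecs[:, -1]     # largest eigenvalue = leading correlation
    return rho, v[:M], v[M:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.uniform(-1, 1, (200, 1))
    # two views that are nonlinearly related through the latent variable t
    X = np.hstack([t, t**2]) + 0.05 * rng.standard_normal((200, 2))
    Y = np.hstack([np.sin(np.pi * t), t**3]) + 0.05 * rng.standard_normal((200, 2))
    rho, alpha_x, alpha_y = kernel_cca(X, Y, sigma_x=0.5, sigma_y=0.5)
    print("leading kernel canonical correlation:", rho)
```

Without the ridge term the denominator matrices $K_x^2$ and $K_y^2$ are typically rank-deficient and the correlation trivially saturates at 1; the eps-regularized version above trades a small bias for a well-posed problem.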