MACHINE LEARNING TECHNIQUES - LASA
Let us now rewrite this result in terms of the input patterns $x^i$, using the kernel $k$ to compute the dot products. We get the decision function:

$$y = \operatorname{sgn}\left(\frac{1}{m_+}\sum_{i\,|\,y_i=+1}\left\langle x, x^i\right\rangle \;-\; \frac{1}{m_-}\sum_{i\,|\,y_i=-1}\left\langle x, x^i\right\rangle \;+\; b\right) \qquad (5.31)$$

$$y = \operatorname{sgn}\left(\frac{1}{m_+}\sum_{i\,|\,y_i=+1} k\!\left(x, x^i\right) \;-\; \frac{1}{m_-}\sum_{i\,|\,y_i=-1} k\!\left(x, x^i\right) \;+\; b\right) \qquad (5.32)$$

If $b = 0$, i.e. the two classes' means are equidistant from the origin, then $k$ can be viewed as a probability density when one of its arguments is fixed. By this, we mean that it is positive and has unit integral,

$$\int_X k\!\left(x, x'\right)\,dx = 1 \qquad \forall\, x' \in X.$$

In this case, $y$ takes the form of the so-called Bayes classifier separating the two classes, subject to the assumption that the two classes of patterns were generated by sampling from two probability distributions that are correctly estimated by the Parzen windows estimators of the two class densities,

$$p_+(x) := \frac{1}{m_+}\sum_{i\,|\,y_i=+1} k\!\left(x, x^i\right) \qquad \text{and} \qquad p_-(x) := \frac{1}{m_-}\sum_{i\,|\,y_i=-1} k\!\left(x, x^i\right),$$

where $x \in X$.

Thus, given some point $x$, the class label is computed by checking which of the two values $p_+(x)$ or $p_-(x)$ is larger. This is the best decision one can take if one has no prior information on the data distribution.
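To make the Parzen-windows view concrete, here is a minimal sketch of the decision rule (5.32) in Python/NumPy. The Gaussian kernel, the function names and the toy data are illustrative assumptions and not part of the original notes.

```python
import numpy as np

def gaussian_kernel(x, xs, sigma=1.0):
    """Gaussian kernel evaluated between one point x and an array of points xs.
    With this normalisation it integrates to 1 in x, so the class averages
    below are Parzen-windows density estimates p_+(x) and p_-(x)."""
    d = x.shape[-1]
    dist2 = np.sum((x - xs) ** 2, axis=-1)
    return np.exp(-dist2 / (2.0 * sigma ** 2)) / ((2.0 * np.pi * sigma ** 2) ** (d / 2))

def kernel_mean_classify(x, X_train, y_train, b=0.0, sigma=1.0):
    """Decision function of Eq. (5.32): sgn( p_+(x) - p_-(x) + b )."""
    p_plus = gaussian_kernel(x, X_train[y_train == +1], sigma).mean()
    p_minus = gaussian_kernel(x, X_train[y_train == -1], sigma).mean()
    return np.sign(p_plus - p_minus + b)

# Toy example: two well separated clusters.
X = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [5.0, 5.0]])
y = np.array([-1, -1, +1, +1])
print(kernel_mean_classify(np.array([0.5, 0.5]), X, y))  # -1.0
print(kernel_mean_classify(np.array([4.5, 4.5]), X, y))  # +1.0
```

With $b = 0$ this amounts exactly to comparing the two Parzen-windows estimates $p_+(x)$ and $p_-(x)$ and returning the label of the larger one.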
5.7.1 Support Vector Machine for Linearly Separable Datasets

SVM consists in determining a hyperplane that serves as the decision boundary in a binary classification problem. We consider first the linear case and then move to the non-linear case.

Linear Support Vector Machines

Let us assume for a moment that our datapoints $x$ live in a feature space $H$. The class of hyperplanes in the dot product space $H$ is given by:

$$\langle w, x\rangle + b = 0, \qquad \text{where } w \in H,\; b \in \mathbb{R}, \qquad (5.33)$$

with the corresponding decision functions

$$f(x) = \operatorname{sgn}\left(\langle w, x\rangle + b\right). \qquad (5.34)$$

We can now define a learning algorithm for linearly separable problems. First, observe that among all hyperplanes separating the data, there exists a unique optimal hyperplane, distinguished by the maximum margin of separation between any training point and the hyperplane, defined by:

$$\underset{w\in H,\, b\in\mathbb{R}}{\text{maximize}}\;\; \min\left\{\left\| x - x^i\right\| \;:\; x \in H,\; \langle w, x\rangle + b = 0,\; i = 1, \ldots, M\right\}. \qquad (5.35)$$

While in the simple classification problem presented earlier it was sufficient to compute the distance between the two clusters' means to define the normal vector, and hence the hyperplane, here the problem of finding the normal vector that leads to the largest margin is slightly more complex. To construct the optimal hyperplane, we have to minimize the objective function $\tau(w)$:

$$\underset{w\in H,\, b\in\mathbb{R}}{\text{minimize}}\;\; \tau(w) = \frac{1}{2}\left\|w\right\|^2, \qquad (5.36)$$

subject to the inequality constraints:

$$y_i\left(\langle w, x^i\rangle + b\right) \geq 1, \qquad \forall\, i = 1, \ldots, M. \qquad (5.37)$$

Consider the points for which the equality in (5.37) holds (requiring that such a point exists is equivalent to choosing a scale for $w$ and $b$). These points lie on two hyperplanes $H_1: \langle w, x^i\rangle + b = +1$ and $H_2: \langle w, x^i\rangle + b = -1$, with common normal $w$ and perpendicular distances from the origin $|1 - b| / \|w\|$ and $|-1 - b| / \|w\|$, respectively. Hence $d_+ = d_- = 1/\|w\|$ and the margin is simply $2/\|w\|$. Note that $H_1$ and $H_2$ are parallel (they have the same normal) and that no training points fall between them. Thus we can find the pair of hyperplanes which gives the maximum margin by minimizing $\|w\|^2$, subject to the constraints (5.37), which ensure that the class label for a given $x^i$ will be $+1$ if $y_i = +1$, and $-1$ if $y_i = -1$.

Let us now rephrase the constrained minimization problem given by (5.36) and (5.37) in terms of the Lagrange multipliers $\alpha_i,\; i = 1, \ldots, M$, one for each of the inequality constraints in (5.37). Recall that the rule is that for constraints of the form $c_i \geq 0$, the constraint equations are multiplied by positive Lagrange multipliers and subtracted from the objective function (5.36) to form the Lagrangian; for equality constraints, the Lagrange multipliers are unconstrained. This gives the Lagrangian:

$$L_P(w, b, \alpha) \equiv \frac{1}{2}\left\|w\right\|^2 - \sum_{i=1}^{M} \alpha_i\, y_i\left(\langle w, x^i\rangle + b\right) + \sum_{i=1}^{M} \alpha_i. \qquad (5.38)$$

We must now minimize $L_P$ with respect to $w$ and $b$, and simultaneously require that the derivatives of $L_P$ with respect to all the $\alpha_i$ vanish, all subject to the constraints $\alpha_i \geq 0$. This is a convex quadratic programming problem.
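Since the primal problem (5.36)-(5.37) is a small quadratic program, it can be handed directly to a generic convex solver. The following is a minimal sketch in Python, assuming NumPy and the cvxpy package are available; the function name and toy data are illustrative, not part of the notes.

```python
import numpy as np
import cvxpy as cp

def hard_margin_svm(X, y):
    """Solve the primal problem (5.36)-(5.37):
    minimize 0.5 * ||w||^2  subject to  y_i (<w, x_i> + b) >= 1 for all i.
    Only feasible when the two classes are linearly separable."""
    _, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    objective = cp.Minimize(0.5 * cp.sum_squares(w))
    constraints = [cp.multiply(y, X @ w + b) >= 1]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value

# Toy separable dataset; the maximum margin equals 2 / ||w||.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w_opt, b_opt = hard_margin_svm(X, y)
print("w =", w_opt, "b =", b_opt, "margin =", 2.0 / np.linalg.norm(w_opt))
```

In practice one usually solves the dual problem derived from the Lagrangian (5.38) rather than the primal, which is what dedicated SVM libraries do; the sketch above only illustrates the geometry of the hard-margin formulation.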