MACHINE LEARNING TECHNIQUES - LASA
quadratic programming problem, since the objective function is itself convex, and those points which satisfy the constraints also form a convex set (any linear constraint defines a convex set, and a set of N simultaneous linear constraints defines the intersection of N convex sets, which is also a convex set). This means that we can equivalently solve the following dual problem: maximize L_P, subject to the constraints that the gradient of L_P with respect to w and b vanish, and subject also to the constraint that the α_i ≥ 0. Requiring that the gradient of L_P with respect to w and b vanish,

\frac{\partial L_P}{\partial w_j} = w_j - \sum_i \alpha_i \, y^i x^i_j, \qquad j = 1, \ldots, N   (5.39)

\frac{\partial L_P}{\partial b} = - \sum_i \alpha_i \, y^i = 0,   (5.40)

gives the conditions:

w = \sum_{i=1}^{M} \alpha_i \, y^i x^i   (5.41)

\sum_{i=1}^{M} \alpha_i \, y^i = 0.   (5.42)

Since these are equality constraints in the dual formulation, we can substitute them into (5.38) to give:

L_D(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j \, y^i y^j \, \langle x^i, x^j \rangle .   (5.43)

Note that we have now given the Lagrangian different labels (P for primal, D for dual) to emphasize that the two formulations are different: L_P and L_D arise from the same objective function but with different constraints, and the solution is found either by minimizing L_P or by maximizing L_D. Note also that if we formulate the problem with b = 0, which amounts to requiring that all hyperplanes contain the origin (a mild restriction for high-dimensional spaces, since it amounts to reducing the number of degrees of freedom by one), then support vector training (for the separable, linear case) amounts to maximizing L_D with respect to the α_i, subject to constraint (5.42) and positivity of the α_i, with the solution given by (5.41).

Notice that there is a Lagrange multiplier α_i for every training point. In the solution, those points for which α_i > 0 are called support vectors and lie on one of the hyperplanes H_1, H_2. All other training points have α_i = 0 and lie either on H_1 or H_2, or on that side of H_1 or H_2 such that the strict inequality in (5.37) is satisfied. For these machines, the support vectors are the critical elements of the training set. They lie closest to the decision boundary; if all other training points were removed (or moved around, but so as not to cross H_1 or H_2) and training was repeated, the same separating hyperplane would be found.
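As an illustration of this dual maximization, the following is a minimal numerical sketch for the separable, linear case. It uses a generic constrained optimizer (SciPy's SLSQP) rather than a dedicated SVM solver, and the function and variable names are illustrative choices, not part of these notes.

```python
import numpy as np
from scipy.optimize import minimize

def svm_dual_solve(X, y):
    """Maximize the dual Lagrangian L_D of (5.43), subject to (5.42) and alpha_i >= 0.

    X : (M, d) array of training points x^i
    y : (M,) array of labels y^i in {-1, +1}
    Returns one Lagrange multiplier alpha_i per training point.
    """
    M = X.shape[0]
    # Label-weighted Gram matrix: H_ij = y^i y^j <x^i, x^j>.
    H = (y[:, None] * y[None, :]) * (X @ X.T)

    # L_D(alpha) = sum_i alpha_i - 1/2 sum_{i,j} alpha_i alpha_j y^i y^j <x^i, x^j>.
    # Maximizing L_D is the same as minimizing -L_D, which is convex in alpha.
    def neg_dual(alpha):
        return 0.5 * alpha @ H @ alpha - alpha.sum()

    constraints = [{'type': 'eq', 'fun': lambda a: a @ y}]  # (5.42): sum_i alpha_i y^i = 0
    bounds = [(0.0, None)] * M                               # positivity: alpha_i >= 0
    res = minimize(neg_dual, np.zeros(M), method='SLSQP',
                   bounds=bounds, constraints=constraints)
    return res.x

# Toy example: two well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (10, 2)), rng.normal(2.0, 0.5, (10, 2))])
y = np.hstack([-np.ones(10), np.ones(10)])
alpha = svm_dual_solve(X, y)
support = alpha > 1e-6   # points with alpha_i > 0 are the support vectors
```

Only a few of the multipliers come out (numerically) nonzero; those training points are the support vectors discussed above.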
Figure 5-6: A binary classification toy problem: separate dark balls from white balls. The optimal hyperplane is shown as the solid line. The problem being separable, there exists a weight vector w and an offset b such that y^i(⟨w, x^i⟩ + b) > 0 (i = 1,…,m). Rescaling w and b such that the points closest to the hyperplane satisfy |⟨w, x^i⟩ + b| = 1, we obtain a canonical form (w, b) of the hyperplane, satisfying y^i(⟨w, x^i⟩ + b) ≥ 1. Note that, in this case, the margin (the distance from the closest point to the hyperplane) equals 1/||w||. This can be seen by considering two points x^1, x^2 on opposite sides of the margin, that is, ⟨w, x^1⟩ + b = 1 and ⟨w, x^2⟩ + b = −1, and projecting them onto the hyperplane normal vector w/||w||. The support vectors are the points on the margin (encircled in the drawing).

The solution of the generic minimization problem, when one does not assume that the hyperplane goes through the origin (i.e. b ≠ 0), can be found by solving the so-called KKT (Karush-Kuhn-Tucker) conditions. These state that, at the solution, the primal constraints must hold, the dual variables must be non-negative, and the product between each dual variable and its constraint has to vanish, i.e.:

y^i \left( \langle w, x^i \rangle + b \right) - 1 \ge 0, \qquad i = 1, \ldots, M   (5.44)

\alpha_i \ge 0, \qquad \forall i = 1, \ldots, M   (5.45)

\alpha_i \left( y^i \left( \langle w, x^i \rangle + b \right) - 1 \right) = 0, \qquad \forall i = 1, \ldots, M   (5.46)

The KKT conditions are necessary for (w, b, α) to be a solution. With all of the above, we can now determine the variables of the SVM problem. The weight vector w is explicitly determined by (5.41). The threshold b is determined by the KKT "complementarity" condition (5.46): choosing any i for which α_i ≠ 0, one can solve for b (note that it is numerically safer to take the mean value of b resulting from all such equations).
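To make this last step concrete, here is a short sketch of how w and b can be recovered once the multipliers α have been obtained (for instance from the dual sketch above). The helper name and the numerical tolerance are illustrative assumptions, not part of these notes.

```python
import numpy as np

def recover_hyperplane(X, y, alpha, tol=1e-6):
    """Recover (w, b) from a dual solution alpha.

    w follows (5.41):  w = sum_i alpha_i y^i x^i.
    b follows the complementarity condition (5.46): for any i with alpha_i > 0,
    y^i (<w, x^i> + b) = 1, hence b = y^i - <w, x^i>.  Averaging this over all
    support vectors is numerically safer than using a single one.
    """
    w = (alpha * y) @ X                # (5.41)
    sv = alpha > tol                   # support vectors: alpha_i > 0 (up to solver accuracy)
    b = np.mean(y[sv] - X[sv] @ w)     # mean of the per-support-vector estimates of b
    return w, b

# Continuing the toy example from the previous sketch:
# w, b = recover_hyperplane(X, y, alpha)
# predictions = np.sign(X @ w + b)
```

The tolerance used to decide which α_i count as strictly positive depends on the accuracy of the dual solver; it should be small compared to the typical size of the nonzero multipliers.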
© A.G.Billard 2004 – Last Update March 2011