MACHINE LEARNING TECHNIQUES - LASA


This is a convex quadratic programming problem, since the objective function is itself convex, and those points which satisfy the constraints also form a convex set (any linear constraint defines a convex set, and a set of N simultaneous linear constraints defines the intersection of N convex sets, which is also a convex set). This means that we can equivalently solve the following dual problem: maximize $L_P$, subject to the constraint that the gradient of $L_P$ with respect to $w$ and $b$ vanishes, and subject also to the constraints that the $\alpha_i \geq 0$. Requiring that the gradient of $L_P$ with respect to $w$ and $b$ vanish,

$$\frac{\partial L_P}{\partial w_j} = w_j - \sum_{i=1}^{M} \alpha_i\, y^i x^i_j = 0, \qquad j = 1,\dots,N$$   (5.39)

$$\frac{\partial L_P}{\partial b} = -\sum_{i=1}^{M} \alpha_i\, y^i = 0,$$   (5.40)

gives the conditions:

$$w = \sum_{i=1}^{M} \alpha_i\, y^i x^i$$   (5.41)

$$\sum_{i=1}^{M} \alpha_i\, y^i = 0.$$   (5.42)

Since these are equality constraints in the dual formulation, we can substitute them into (5.38) to give:

$$L_D(\alpha) = \sum_{i} \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y^i y^j \left\langle x^i, x^j \right\rangle.$$   (5.43)

Note that we have now given the Lagrangian different labels (P for primal, D for dual) to emphasize that the two formulations are different: $L_P$ and $L_D$ arise from the same objective function but with different constraints, and the solution is found either by minimizing $L_P$ or by maximizing $L_D$. Note also that if we formulate the problem with $b = 0$, which amounts to requiring that all hyperplanes contain the origin (a mild restriction in high-dimensional spaces, since it only reduces the number of degrees of freedom by one), then support vector training (for the separable, linear case) amounts to maximizing $L_D$ with respect to the $\alpha_i$, subject to constraint (5.42) and positivity of the $\alpha_i$, with the solution given by (5.41). Notice that there is a Lagrange multiplier $\alpha_i$ for every training point. In the solution, those points for which $\alpha_i > 0$ are called support vectors and lie on one of the hyperplanes $H_1$, $H_2$. All other training points have $\alpha_i = 0$ and lie either on $H_1$ or $H_2$, or on that side of $H_1$ or $H_2$ such that the strict inequality in (5.37) is satisfied.

For these machines, the support vectors are the critical elements of the training set. They lie closest to the decision boundary; if all other training points were removed (or moved around, but so as not to cross $H_1$ or $H_2$) and training was repeated, the same separating hyperplane would be found.
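To see the dual formulation at work, the following short sketch (not part of the original notes; the variable names and the toy data set are illustrative assumptions) maximizes $L_D$ of (5.43) subject to (5.42) and $\alpha_i \geq 0$ for a small linearly separable problem, using SciPy's general-purpose SLSQP solver as a stand-in for a dedicated QP solver, and then recovers $w$ from (5.41):

import numpy as np
from scipy.optimize import minimize

# Toy, linearly separable data set: M = 6 points in N = 2 dimensions,
# with labels y^i in {-1, +1}. (Illustrative values, not from the notes.)
X = np.array([[2.0, 2.0], [2.5, 1.5], [3.0, 3.0],
              [0.0, 0.0], [0.5, -0.5], [-1.0, 0.5]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
M = len(y)

# Matrix with entries y^i y^j <x^i, x^j>, as they appear in (5.43).
H = (y[:, None] * X) @ (y[:, None] * X).T

def neg_dual(alpha):
    # Negative of L_D(alpha) in (5.43); minimizing it maximizes the dual.
    return -(alpha.sum() - 0.5 * alpha @ H @ alpha)

res = minimize(neg_dual, np.zeros(M), method="SLSQP",
               bounds=[(0.0, None)] * M,                  # alpha_i >= 0
               constraints=[{"type": "eq",
                             "fun": lambda a: a @ y}])    # (5.42): sum_i alpha_i y^i = 0
alpha = res.x

w = (alpha * y) @ X                    # w from (5.41)
support = np.where(alpha > 1e-6)[0]    # points with alpha_i > 0 (up to tolerance): support vectors
print("support vectors:", support, " w =", w)

Only the support vectors end up with a noticeably non-zero $\alpha_i$, illustrating the remark above that removing all other training points would leave the separating hyperplane unchanged.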

Figure 5-6: A binary classification toy problem: separate dark balls from white balls. The optimal hyperplane is shown as the solid line. The problem being separable, there exists a weight vector $w$ and an offset $b$ such that $y^i\left(\langle w, x^i\rangle + b\right) > 0$, $i = 1,\dots,M$. Rescaling $w$ and $b$ such that the points closest to the hyperplane satisfy $\left|\langle w, x^i\rangle + b\right| = 1$, we obtain a canonical form $(w, b)$ of the hyperplane, satisfying $y^i\left(\langle w, x^i\rangle + b\right) \geq 1$. Note that, in this case, the margin (the distance from the hyperplane to the closest point) equals $1/\|w\|$. This can be seen by considering two points $x^1, x^2$ on opposite sides of the margin, that is, $\langle w, x^1\rangle + b = 1$ and $\langle w, x^2\rangle + b = -1$, and projecting them onto the hyperplane normal vector $w/\|w\|$. The support vectors are the points on the margin (encircled in the drawing).

The solution of the generic minimization problem, when one does not assume that the hyperplane goes through the origin (i.e. $b \neq 0$), can be found by solving the so-called KKT (Karush-Kuhn-Tucker) conditions. These state that, at the solution, the product between the dual variables and the constraints has to vanish, i.e.:

$$y^i\left(\langle w, x^i\rangle + b\right) - 1 \geq 0, \qquad i = 1,\dots,M$$   (5.44)

$$\alpha_i \geq 0, \qquad \forall\, i = 1,\dots,M$$   (5.45)

$$\alpha_i \left( y^i\left(\langle w, x^i\rangle + b\right) - 1 \right) = 0, \qquad \forall\, i = 1,\dots,M$$   (5.46)

The KKT conditions are necessary for $w, b, \alpha$ to be a solution.

With all of the above, we can now determine the variables of the SVM problem. $w$ is explicitly determined by (5.41). The threshold $b$ is determined by solving the KKT "complementarity" condition given by (5.46): by choosing any $i$ for which $\alpha_i \neq 0$, one can compute $b$ (note that it is numerically safer to take the mean value of $b$ resulting from all such equations).
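Continuing the sketch above (again an illustration, not the notes' own code), the threshold $b$ follows from the complementarity condition (5.46): for any support vector, $y^i\left(\langle w, x^i\rangle + b\right) = 1$, hence $b = y^i - \langle w, x^i\rangle$, and averaging over all support vectors is the numerically safer choice mentioned in the text:

# For every support vector, (5.46) gives y^i (<w, x^i> + b) = 1,
# i.e. b = y^i - <w, x^i> (since y^i is +1 or -1).
b_values = y[support] - X[support] @ w
b = b_values.mean()                    # average over all support vectors

def decision(x_new):
    # Classify points by the sign of <w, x> + b.
    return np.sign(x_new @ w + b)

print("b =", b, " training predictions:", decision(X))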

