
for all $i \in J(\bar{x}) \cap \{k+1, \ldots, m\}$, and for any $y \neq 0$ with $y \in \{y : (\nabla g_i(\bar{x}))y = 0, \ i \in J(\bar{x})\}$, $y^T \dfrac{\partial^2 L(\bar{x}, \bar{\mu})}{\partial x^2}\, y > 0$,

(iii) the initial point $x^0$ is sufficiently close to a KKT point for (1.22),

it has been proved (see references [1.44, 1.45]) that the sequence $(x^r, \mu^r)$ generated by the algorithm converges superlinearly to $(\bar{x}, \bar{\mu})$, which together satisfy (1.23).
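For reference, assuming the standard Lagrangian for a problem in the form of (1.22) (minimize $\theta(x)$ subject to $g_i(x) = 0$ for $i = 1$ to $k$ and $g_i(x) \geq 0$ for $i = k+1$ to $m$), the matrix $\frac{\partial^2 L}{\partial x^2}$ appearing in the condition above is

$L(x, \mu) = \theta(x) - \sum_{i=1}^{m} \mu_i\, g_i(x), \qquad \dfrac{\partial^2 L(x, \mu)}{\partial x^2} = \nabla^2 \theta(x) - \sum_{i=1}^{m} \mu_i\, \nabla^2 g_i(x).$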

These recursive quadratic programming methods have given outstanding numerical performance and have thereby attracted a lot of attention. However, as pointed out above, one difficulty with this approach is that the quadratic programming problem (1.26) may be infeasible in some steps, even if the original nonlinear program has an optimum solution; in addition, the modified quadratic program (1.33) may have the optimum solution $\tilde{d} = 0$, in which case the method breaks down. Another difficulty is that constraint gradients need to be computed for each constraint in each step, even for constraints which are inactive. Yet another difficulty is that the function minimized in the line search routine in each step is a non-differentiable $L_1$-penalty function.
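To see why, note that an $L_1$ (exact) penalty function for a problem in the form of (1.22) typically looks like the following; the penalty weight $\nu > 0$ and this particular expression are illustrative, not the exact merit function used in [1.44, 1.45]:

$P_1(x; \nu) = \theta(x) + \nu \Big( \sum_{i=1}^{k} |g_i(x)| + \sum_{i=k+1}^{m} \max\{0,\, -g_i(x)\} \Big).$

The absolute-value and max terms have kinks wherever a constraint switches between satisfied and violated, so $P_1$ is not differentiable there, and standard smooth line search theory does not apply directly.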

To avoid these and other difficulties, the following modified sequential quadratic programming method has been proposed for solving (1.22) by K. Schittkowski [1.50, 1.51].

Choose the initial point $x^0$, multiplier vector $\mu^0$, $B_0 = I$ or some PD symmetric approximation for $\frac{\partial^2 L(x^0, \mu^0)}{\partial x^2}$, $\delta_0 \in \mathbf{R}^1$, $\rho^0 \in \mathbf{R}^m$ ($\delta_0 > 0$, $\rho^0 > 0$), and constants $\varepsilon > 0$, $\gamma > 1$, $0 < \beta < 1$. The choice of $\varepsilon = 10^{-7}$, $\beta = 0.9$, $\gamma = 100$, and suitable positive values for $\delta_0$, $\rho^0$ is reported to work well by K. Schittkowski [1.51]. Evaluate $\theta(x^0)$, $g_i(x^0)$, $\nabla g_i(x^0)$, $i = 1$ to $m$, and go to stage 1.
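As a concrete illustration (not taken from [1.50, 1.51]), the initialization can be sketched in Python as follows; the callables theta, g and grad_g, standing for $\theta(\cdot)$, the constraint vector and its Jacobian, are hypothetical placeholders supplied by the user, and the values chosen for delta0 and rho0 are merely examples of "suitable positive values".

import numpy as np

def initialize(x0, m, theta, g, grad_g):
    """Sketch of the start-up step: multipliers, Hessian approximation,
    tolerances, and the first round of function/gradient evaluations."""
    mu0 = np.zeros(m)                      # initial multiplier vector mu^0
    B0 = np.eye(len(x0))                   # B_0 = I, a PD symmetric approximation
    eps, beta, gamma = 1e-7, 0.9, 100.0    # constants reported to work well in [1.51]
    delta0 = 1.0                           # positive scalar penalty parameter (illustrative)
    rho0 = np.ones(m)                      # positive penalty vector in R^m (illustrative)
    theta0 = theta(x0)                     # theta(x^0)
    g0 = g(x0)                             # g_i(x^0), i = 1 to m
    G0 = grad_g(x0)                        # m x n Jacobian; rows are grad g_i(x^0)
    return mu0, B0, eps, beta, gamma, delta0, rho0, theta0, g0, G0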

General Stage r+1: Let $x^r$, $\mu^r$ denote the current solution and Lagrange multiplier vector. Define

$J_1 = \{1, \ldots, k\} \cup \{\, i : k+1 \leq i \leq m \text{ and either } g_i(x^r) \leq \varepsilon \text{ or } \mu^r_i > 0 \,\}$
$J_2 = \{1, \ldots, m\} \setminus J_1 .$

The constraints in (1.22) corresponding to $i \in J_1$ are treated as the active set of constraints at this stage; the constraints in (1.22) corresponding to $i \in J_2$ are the current inactive constraints.
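A minimal sketch of this index-set computation, assuming g_r and mu_r are arrays holding $g_i(x^r)$ and $\mu^r_i$ for $i = 1$ to $m$ (0-based indices are used, so the first k entries correspond to the equality constraints of (1.22)):

def active_sets(g_r, mu_r, k, eps):
    """Return (J1, J2): the currently active and inactive constraint indices."""
    m = len(g_r)
    J1 = list(range(k))                # equality constraints are always treated as active
    for i in range(k, m):              # inequality constraints g_i(x) >= 0
        if g_r[i] <= eps or mu_r[i] > 0:
            J1.append(i)               # nearly binding, or carrying a positive multiplier
    J2 = [i for i in range(m) if i not in J1]
    return J1, J2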

Let $B_r$ be the present matrix, which is a PD symmetric approximation for $\frac{\partial^2 L(x^r, \mu^r)}{\partial x^2}$; this matrix is updated from step to step using the BFGS quasi-Newton update formula discussed earlier. The quadratic programming subproblem to be solved at this stage contains an additional variable, $x_{n+1}$, to make sure it is feasible. It is the following:

minimize $\quad P(d) = \frac{1}{2} d^T B_r d + (\nabla\theta(x^r)) d + \frac{1}{2} \delta_r x_{n+1}^2$

subject to
$(\nabla g_i(x^r)) d + (1 - x_{n+1})\, g_i(x^r) = 0, \qquad i = 1 \text{ to } k$
$(\nabla g_i(x^r)) d + (1 - x_{n+1})\, g_i(x^r) \geq 0, \qquad i \in J_1 \cap \{k+1, \ldots, m\}$
$(\nabla g_i(x^{s_i})) d + g_i(x^r) \geq 0, \qquad i \in J_2$
$0 \leq x_{n+1} \leq 1$

(1.36)
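The subproblem (1.36) is a convex QP in $(d, x_{n+1})$ whenever $B_r$ is PD, so any QP solver can be used. The sketch below uses the cvxpy modeling package purely for illustration; the names qp_subproblem, G and delta_r are placeholders, and G is assumed to be an m x n array whose i-th row is the gradient used for constraint i, namely $\nabla g_i(x^r)$ for $i \in J_1$ and the previously computed $\nabla g_i(x^{s_i})$ for $i \in J_2$ (presumably $s_i$ is the most recent stage at which that gradient was evaluated, which is how the method avoids recomputing gradients of inactive constraints).

import cvxpy as cp

def qp_subproblem(B_r, grad_theta_r, g_r, G, J1, J2, k, delta_r):
    """Set up and solve the direction-finding QP (1.36) for one stage."""
    n = B_r.shape[0]
    d = cp.Variable(n)                     # search direction d
    x_extra = cp.Variable()                # the additional variable x_{n+1}
    objective = (0.5 * cp.quad_form(d, B_r)
                 + grad_theta_r @ d
                 + 0.5 * delta_r * cp.square(x_extra))
    constraints = [x_extra >= 0, x_extra <= 1]
    for i in range(k):                     # equality constraints of (1.22)
        constraints.append(G[i] @ d + (1 - x_extra) * g_r[i] == 0)
    for i in J1:
        if i >= k:                         # active inequality constraints
            constraints.append(G[i] @ d + (1 - x_extra) * g_r[i] >= 0)
    for i in J2:                           # inactive constraints, old gradients reused
        constraints.append(G[i] @ d + g_r[i] >= 0)
    problem = cp.Problem(cp.Minimize(objective), constraints)
    problem.solve()
    return d.value, x_extra.value

Because the relaxed constraints become the original linearized constraints when $x_{n+1} = 0$ and are trivially satisfiable when $x_{n+1} = 1$, the subproblem is always feasible, while the penalty term $\frac{1}{2}\delta_r x_{n+1}^2$ in the objective pushes $x_{n+1}$ toward zero.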
