Jointly Constrained Biconvex Programming
Faiz A. Al-Khayyal and James E. Falk


… and

    B_2 = [matrix entries not legible in this copy].

The algorithm found the minimum -45.38 at the point x* = (4.5667, 20, 3.2, 0, 0) and y* = (0, 0, 0.19565, 0.086957, 0), after exploring five nodes, four of which were deleted before solving the extra problem called for by the acceleration of convergence technique. When the problem was resolved using the objective function (2, 4, 8, -1, -3)x + x^T y + (-2, -1, -2, -4, 5)y, the algorithm located the minimum -42.963 at the point x* = (0, 5.9821, 0, 4.375, 20) and y* = (0.80645, 0, 0.45161, 0, 0) after solving only one linear program.

EXAMPLE 2.

    Minimize over (x, y)   (-1, -2, -3, -4, -5)x + x^T y + (-1, -2, -3, -4, -5)y
    subject to             B(x, y) ≤ b,
                           x_1 + x_2 + x_3 - 2x_4 + x_5 + y_1 + y_2 + 4y_3 + y_4 + 3y_5 ≤ 200,
                           Σ_{i=1}^{5} x_i ≥ 1,    Σ_{i=1}^{5} y_i ≥ 2,
                           0 ≤ x_i ≤ 100,    0 ≤ y_i ≤ 100,

where B is the constraint matrix displayed in the original (its entries are not legible in this copy) and

    b = (80, 57, 92, 55, 76, 14, 47, 51, 36, 92)^T.

Several problems were solved using the same B matrix and b vector. After each solution, the cost vector and the feasible-region bounding constraints were arbitrarily adjusted in an attempt to produce a more challenging problem. The problem exhibited above was the result of this experiment. The algorithm found the minimum -794.86 at the point x* = (100, 0, 0, 80.94, 0) and y* = (0, 0, 17.828, 0, 63.523). For this problem, the extra linear program was solved at seven of the thirteen nodes explored.
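As a quick sanity check on the values reported above, the objective can be recomputed at the reported points. This is only a sketch using the cost vectors printed here; it does not verify feasibility, since the B matrices are not reproduced legibly in this copy, and the NumPy code is illustrative rather than part of the original paper.

```python
import numpy as np

def bilinear_objective(c, d, x, y):
    """Evaluate the bilinear objective c^T x + x^T y + d^T y used in the examples."""
    return float(c @ x + x @ y + d @ y)

# Resolved version of Example 1: objective (2, 4, 8, -1, -3)x + x^T y + (-2, -1, -2, -4, 5)y.
c1 = np.array([2, 4, 8, -1, -3], dtype=float)
d1 = np.array([-2, -1, -2, -4, 5], dtype=float)
x1 = np.array([0, 5.9821, 0, 4.375, 20])
y1 = np.array([0.80645, 0, 0.45161, 0, 0])
print(bilinear_objective(c1, d1, x1, y1))   # approximately -42.963

# Example 2: objective (-1, -2, -3, -4, -5)x + x^T y + (-1, -2, -3, -4, -5)y.
c2 = np.array([-1, -2, -3, -4, -5], dtype=float)
x2 = np.array([100, 0, 0, 80.94, 0])
y2 = np.array([0, 0, 17.828, 0, 63.523])
print(bilinear_objective(c2, c2, x2, y2))   # approximately -794.86
```

Note that x^T y vanishes at both reported points, since the supports of x* and y* do not overlap, so the optimal values come entirely from the linear terms.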

Extensions. With a little extra work the algorithm described in this paper can be used to solve programs having a general bilinear form in the objective function; namely,

    minimize over (x, y)   f(x, y) = f(x) + x^T A y + g(y)
    subject to             (x, y) ∈ S ∩ Ω,

where A is a general p × q matrix, and all functions and sets are as defined earlier. The extra work involved takes the form of exploiting any structure that A may have and then making a coordinate change that transforms the bilinear term x^T A y into an inner product term in the new coordinate variables. These coordinate transformations correspond to the introduction of new bases for the range space and domain space of A. Care must be taken to compute upper and lower bounds on the new variables, preferably as tight as possible. We will illustrate the approach by describing how certain A matrices could be handled.

When A is nonsingular and is also isotone (or antitone), making the transformation z = Ay immediately leads to a canonical form accepted by the algorithm. If A is only nonsingular, a hyperrectangle bounding the z-vector can be computed with little extra work. The most general problem arises when A is nonsquare with rank p. For this case we appeal to matrix factorization techniques that have been extensively used in large-scale linear programming procedures. In particular, the A = QRZ decomposition and the singular-value decomposition lead to a suitable transformed problem. The theory and numerical techniques involved are discussed in Stewart [19] and Noble [16]. These matrix decomposition procedures yield an inner product in the new coordinates with only p nonzero terms (a numerical sketch of this coordinate change is given below). Thus the added work in decomposing A is exploited by the algorithm, which now has only p convex envelopes and branching indices to account for, a considerable savings when p is small.

Another extension of the algorithm is to permit f and g to be separable concave functions. That is, the objective function can be expressed as

    φ(x, y) = Σ_{i=1}^{n} f_i(x_i) + x^T y + Σ_{i=1}^{n} g_i(y_i),

where all f_i and g_i are concave. Convex envelopes of f_i and g_i over the bounded variables are easily computed. Moreover, the convex envelope of …
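To make the decomposition idea concrete, here is a minimal NumPy sketch with illustrative data and variable names that are not from the paper. It uses the singular-value decomposition A = U diag(s) V^T, so that in the new coordinates u = U^T x and v = V^T y the bilinear term x^T A y becomes Σ_i s_i u_i v_i with at most p nonzero terms, and it bounds the new variables by simple interval arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data only: a p x q matrix A with p < q, and box bounds on x.
p, q = 2, 4
A = rng.integers(-3, 4, size=(p, q)).astype(float)

# Singular-value decomposition A = U @ np.diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# In the new coordinates u = U^T x, v = Vt y the bilinear term is a short inner product.
x = rng.uniform(0.0, 10.0, size=p)
y = rng.uniform(0.0, 10.0, size=q)
u, v = U.T @ x, Vt @ y
assert np.isclose(x @ A @ y, np.sum(s * u * v))  # only p terms contribute

# Bounding the new variables: with 0 <= x <= 10 componentwise, interval arithmetic
# yields a (generally loose) hyperrectangle containing u; tighter bounds could be
# obtained by solving small LPs over the original feasible set.
x_lo, x_hi = np.zeros(p), np.full(p, 10.0)
M = U.T
u_lo = np.minimum(M, 0.0) @ x_hi + np.maximum(M, 0.0) @ x_lo
u_hi = np.maximum(M, 0.0) @ x_hi + np.minimum(M, 0.0) @ x_lo
print("bounds on u:", list(zip(u_lo.round(3), u_hi.round(3))))
```

The same bounding step applies to v = V^T y; the branching and convex-envelope machinery then has to handle only the p products s_i u_i v_i, consistent with the savings described above.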
