
20 Linear Optimization

on noting that (λ_1, ..., λ_k, 0, ..., 0) ∈ Q = {y : y ≥ o, yA = c} by (3). In particular, Q ≠ ∅. (2) then shows that the infimum is also attained and that the maximum is less than or equal to the minimum. Thus (1) holds in the case where P ≠ ∅ and the supremum is attained.

Assume, finally, that Q ≠ ∅ and that ε = inf{yb : y ≥ o, yA = c} is attained at q ∈ Q = {y : yA = c, −y ≤ o}, say. Then −b is an exterior normal vector of the support hyperplane {y : yb = ε} of Q at q. Proposition 20.1 then shows that

(4) b = µ_1 b_1 + ··· + µ_d b_d + ν_1 e_1 + ··· + ν_l e_l with suitable µ_i ∈ R, ν_j ≥ 0,

where b_1, ..., b_d are the column vectors of A and e_1, ..., e_l are standard unit vectors of E^d such that e_j has entry 1 where q has entry 0. In particular,

(5) q e_j = 0 for j = 1, ..., l.

(4) implies that

A(µ_1, ..., µ_d)^T = µ_1 b_1 + ··· + µ_d b_d = b − ν_1 e_1 − ··· − ν_l e_l ≤ b.

Thus (µ_1, ..., µ_d)^T ∈ P and therefore P ≠ ∅. This, together with (4) and (5), shows that

min{yb : y ≥ o, yA = c} = ε = qb = µ_1 q b_1 + ··· + µ_d q b_d
    = q A(µ_1, ..., µ_d)^T = c(µ_1, ..., µ_d)^T
    ≤ sup{cx : Ax ≤ b}.

Since P, Q ≠ ∅, (2) shows that the supremum is also attained and that the maximum is less than or equal to the minimum. Thus (1) holds also in the case where Q ≠ ∅ and the infimum is attained. The proof of the theorem is complete. ⊓⊔
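As a concrete illustration of the duality just proved (not part of the book's text), the following sketch checks numerically that the two optima coincide. It assumes SciPy's linprog is available; the data A, b, c are an arbitrary small example chosen for this check.

```python
# Numerical check of the duality theorem: the primal is sup{cx : Ax <= b},
# the dual is min{yb : y >= o, yA = c}; with both P and Q nonempty the
# theorem says the two optimal values are equal.
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the book): x1 <= 4, x2 <= 3, x1 + x2 <= 5, x >= o.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 5.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

# Primal: linprog minimizes, so maximize cx by minimizing -cx; x is free.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2, method="highs")

# Dual: minimize yb subject to yA = c (i.e. A^T y = c) and y >= o.
dual = linprog(b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * len(b), method="highs")

print("max cx =", -primal.fun)   # 14.0
print("min yb =", dual.fun)      # 14.0, equal as the duality theorem asserts
```

Replacing A, b, c by any other data for which both P and Q are nonempty should again leave the two printed values equal.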

20.2 The Simplex Algorithm

The simplex algorithm of Dantzig [237], or one of its variants, is still the standard method for linear optimization problems.

In this section we give a description of the simplex algorithm and show that it leads to a solution. The presentation follows Schrijver [915].

The Idea of the Simplex Algorithm

Consider a linear optimization problem of the form

(1) sup{cx : Ax ≤ b},

where a vertex v_0 of the feasible set P = {x : Ax ≤ b} is given. Check whether there is an edge of P starting at v_0 along which the objective function cx increases. If there is no such edge, v_0 is an optimum solution. Otherwise move along one of these edges. If this edge is a ray, the supremum is infinite. If not, let v_1 be the other vertex on this edge. Repeat this step with v_1 instead of v_0, etc. In finitely many steps this either leads to an optimum solution or shows that the supremum is infinite.
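The step just described can be made explicit. The following is a minimal sketch of one possible implementation of this vertex-to-vertex walk, not the book's algorithm: the name simplex_walk, the tolerance 1e-12 and the example data are illustrative assumptions. It presupposes that a starting vertex together with a set of linearly independent active constraints is given, that the problem is non-degenerate, and it uses no anti-cycling rule.

```python
# Sketch of the simplex idea for sup{cx : Ax <= b}, starting at a vertex v
# with active constraint rows A[I] (I a list of linearly independent indices).
import numpy as np

def simplex_walk(A, b, c, v, I):
    """Walk along improving edges of P = {x : Ax <= b}; assumes non-degeneracy."""
    while True:
        A_I = A[I]                       # active subsystem at the current vertex
        u = c @ np.linalg.inv(A_I)       # u solves u A_I = c
        if np.all(u >= -1e-12):
            return v, "optimal"          # no edge along which cx increases
        i = int(np.argmin(u))            # an active constraint with u_i < 0
        # Edge direction w with A_I w = -e_i: constraint I[i] becomes slack,
        # the other active constraints stay tight, and cw = -u_i > 0.
        w = -np.linalg.inv(A_I)[:, i]
        Aw = A @ w
        slack = b - A @ v
        blocking = np.where(Aw > 1e-12)[0]
        if blocking.size == 0:
            return w, "unbounded"        # the edge is a ray, the supremum is infinite
        t = slack[blocking] / Aw[blocking]
        j = blocking[int(np.argmin(t))]  # first constraint hit along the edge
        v = v + t.min() * w              # the other vertex of the edge
        I[i] = j                         # exchange leaving and entering constraints

# Same illustrative data as above; start at v0 = (0, 0), where rows 3 and 4
# (the constraints -x1 <= 0 and -x2 <= 0) are active.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 5.0, 0.0, 0.0])
c = np.array([3.0, 2.0])
print(simplex_walk(A, b, c, np.zeros(2), [3, 4]))   # ends at (4, 1), value 14
```

Each pass of the loop is one step of the walk described above: the test u ≥ o corresponds to "no increasing edge", an empty blocking set to the ray case, and the index exchange to moving from v_0 to the next vertex v_1.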
