
Answers to some of the exercises.

Chapters 3, 4, 5, 6.

Ex. 3.1 An upper bound on the optimal value is $\sum_{i=1}^{k+1} v_i$. The value of the solution given by the algorithm is

$$\max\Big\{v_{i^*},\; \sum_{i=1}^{k} v_i\Big\} \;\ge\; \frac{1}{2}\Big(v_{i^*} + \sum_{i=1}^{k} v_i\Big) \;\ge\; \frac{1}{2}\sum_{i=1}^{k+1} v_i \;\ge\; \frac{1}{2}\,\mathrm{Opt}.$$
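For concreteness, here is a minimal Python sketch of the greedy algorithm this bound refers to, under the usual assumptions: items are scanned in order of non-increasing value density, $k$ is the number of items in the greedy prefix, $i^* = k+1$ is the first item that does not fit, and every single item fits in the knapsack on its own. All names are illustrative, not from the text.

```python
def greedy_knapsack(values, sizes, capacity):
    # Scan items by non-increasing value density v_i / s_i.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / sizes[i], reverse=True)
    prefix, load = [], 0
    for i in order:
        if load + sizes[i] > capacity:
            # i is the first item that does not fit: the item i* above.
            # Return the better of the greedy prefix and {i*} alone
            # (this assumes every single item fits by itself).
            if values[i] > sum(values[j] for j in prefix):
                return [i]
            return prefix
        prefix.append(i)
        load += sizes[i]
    return prefix  # everything fits, so the greedy solution is optimal
```

By the inequality above, the value of the returned set is at least half of $\mathrm{Opt}$.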

Ex. 3.2 The running time for Knapsack in Section 3.1 is $O(n^3/\epsilon)$. We improve this here to $O(n^2/\epsilon)$. Let $L \le \mathrm{Opt} \le U$. In Section 3.1 we had the lower bound $L = M = \max_i v_i$ and the upper bound $U = V = \sum_i v_i \le nL$. From Exercise 3.1 we see that we can take $L = \max\{v_{i^*}, \sum_{i=1}^{k} v_i\}$ (where $k$ and $i^*$ are as defined in Exercise 3.1), and $U = 2L$. That means, instead of $U \le nL$ we may use $U = 2L$, and that leads to the improved running time.

Let us follow the analysis of Section 3.1 but now with a general lower bound $L$ and upper bound $U$. Let

$$v'_i = \lfloor v_i/\mu \rfloor, \quad \text{where } \mu = \frac{\epsilon L}{n}.$$

As before, let $S$ be the set of items found by the D.P. after rounding and let $O$ be the set of items in the optimal solution for the non-rounded instance. Then in the same way we get

$$\mathrm{value}(S) = \sum_{i\in S} v_i \ge \sum_{i\in S} \mu v'_i \ge \sum_{i\in O} \mu v'_i \ge \sum_{i\in O} (v_i - \mu) = \mathrm{Opt} - |O|\mu \ge \mathrm{Opt} - n\mu = \mathrm{Opt} - \epsilon L \ge \mathrm{Opt} - \epsilon\,\mathrm{Opt} = (1-\epsilon)\,\mathrm{Opt}.$$

So for the approximation factor it works out fine to use any lower bound $L$ instead of the specific bound $M$ of Section 3.1.

Now we analyze the running time. Let $\mathrm{Opt}'$ be the optimal value for the rounded instance. Then

$$\mathrm{Opt}' = \sum_{i\in S} v'_i \le \sum_{i\in S} v_i/\mu \le \sum_{i\in O} v_i/\mu = \frac{\mathrm{Opt}}{\mu} = \frac{n\,\mathrm{Opt}}{\epsilon L} \le \frac{nU}{\epsilon L}.$$

The running time of the D.P. is $O(n\,\mathrm{Opt}')$. From Exercise 3.1 we see that we can take $U = 2L$. Hence

$$O(n\,\mathrm{Opt}') = O\Big(\frac{n^2 U}{\epsilon L}\Big) = O\Big(\frac{n^2 \cdot 2L}{\epsilon L}\Big) = O(n^2/\epsilon).$$
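A sketch of the resulting FPTAS, reusing `greedy_knapsack` from the previous sketch for the lower bound $L$. Here `dp_max_value` is a hypothetical hook for the dynamic program of Section 3.1 (not defined here), assumed to solve integer-valued instances exactly.

```python
def fptas_knapsack(values, sizes, capacity, eps, dp_max_value):
    n = len(values)
    # Lower bound L of Ex. 3.1; by that exercise, L <= Opt <= U = 2L.
    L = sum(values[i] for i in greedy_knapsack(values, sizes, capacity))
    mu = eps * L / n                          # scaling factor mu = eps*L/n
    scaled = [int(v // mu) for v in values]   # v'_i = floor(v_i / mu)
    # The rounded optimum is at most nU/(eps*L) = 2n/eps, so the D.P.
    # below runs in O(n * n/eps) = O(n^2/eps) time.
    return dp_max_value(scaled, sizes, capacity)
```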



Ex. 3.3 This exercise consists of three parts:

(A) Prove that there always exists an optimal schedule in which (i) all on-time jobs complete before all late jobs, and (ii) the on-time jobs complete in an earliest due date (EDD) order.

(B) Give a dynamic program.

(C) Give a (fully) polynomial-time approximation scheme (FPTAS).

Parts (B) and (C) are almost identical to the knapsack problem of Section 3.1.

(A) Consider an arbitrary schedule σ and let S be the set of jobs that complete before their due date. These jobs will still complete before their due date if we place all jobs that are not on-time after the jobs from S, while keeping the same order for S. This proves (i). So assume from now on that the jobs from S come first in σ. Next, we place the jobs from S in EDD order and show that each job from S remains on-time. One way to put the jobs from S in σ in EDD order is to swap two adjacent jobs from S whenever they are not in EDD order, and to keep doing this until the jobs from S are in EDD order. We need to show that for every swap the two swapped jobs remain on-time. Say that job j is followed by job k and $d_j > d_k$. Let $C_j, C_k$ be the completion times before the swap and $C'_j, C'_k$ the completion times after the swap. Both jobs remain on-time since

$$C'_k < C_k \le d_k \quad \text{and} \quad C'_j = C_k \le d_k < d_j.$$

(B) The next part of the question is to give a dynamic program. The problem is similar to the knapsack problem of Section 3.1: we need to find a subset S of the jobs to be placed first in the schedule such that they are all on-time and the total weight of the selected jobs is maximized. Let us follow the same approach as for the knapsack problem. In fact, the analysis below is almost identical to that of the knapsack problem; we merely changed 'item' into 'job' and $v_j$ into $w_j$.

Label the jobs such that $d_1 \le d_2 \le \cdots \le d_n$. Assume that $p_j \le d_j$ for every job j, since otherwise the job can never be on-time and it can be removed from the instance. Define:

$A_j$: the set of all pairs $(t, w)$ such that there is a subset of the jobs in $\{1, \ldots, j\}$ with value exactly $w$ and size exactly $t$, and such that all these jobs can be scheduled on-time.

The pair that we are looking for is the $(t, w) \in A_n$ with maximum value $w$. By definition, $A_1 = \{(0, 0), (p_1, w_1)\}$. Now, $A_j$ can be computed from $A_{j-1}$ as follows.

For $j = 2$ to $n$ do:
(1) $A_j \leftarrow A_{j-1}$
(2) For each $(t, w) \in A_{j-1}$ do: if $t + p_j \le d_j$ then add $(t + p_j,\, w + w_j)$ to $A_j$.

In the first step, all pairs from $A_{j-1}$ are added to $A_j$. This corresponds with not choosing job j. In the second step, we add job j if it can be placed at the end such that it completes before its due date.

The size of each set is bounded by $|A_j| \le P \cdot W$ for all j, where $P = \sum_j p_j$ and $W = \sum_j w_j$. Hence, the total running time of this dynamic program is $O(nPW)$.

We can improve the running time by adding an extra step (3) in which dominated pairs are removed. A pair $(t, w)$ dominates another pair $(t', w')$ if $t \le t'$ and $w \ge w'$. By removing each pair that is dominated by another pair, the number of pairs in $A_j$ is at most $\min\{P, W\}$.



Now, steps (1) and (2) can be done in $O(\min\{P, W\})$ time. Also, the removal of dominated pairs (as an extra step (3)) can be done within this time bound. We see that the improved total running time is $O(n \min\{P, W\})$, which is even better than $O(nW)$ time.
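A small Python sketch of this dynamic program with the dominance pruning of step (3). The input format — a list of $(p_j, w_j, d_j)$ triples, pre-sorted by due date, with $p_j \le d_j$ — is an assumption of the sketch.

```python
def max_ontime_weight(jobs):
    """jobs: list of (p_j, w_j, d_j), sorted by due date (EDD)."""
    A = [(0, 0)]  # pairs (t, w): total size t, total weight w, all on-time
    for p, w, d in jobs:
        # Step (2): extend every old pair by this job if it stays on-time.
        extended = [(t + p, wt + w) for (t, wt) in A if t + p <= d]
        # Steps (1)+(3): merge and remove dominated pairs.  Sorting by
        # (t, -w) puts, for each size t, the heaviest pair first; a pair
        # survives only if it is strictly heavier than all smaller-t pairs.
        A, best_w = [], -1
        for t, wt in sorted(A + extended, key=lambda pw: (pw[0], -pw[1])):
            if wt > best_w:
                A.append((t, wt))
                best_w = wt
    return max(wt for _, wt in A)

# e.g. max_ontime_weight([(2, 10, 3), (1, 5, 4)]) == 15: both jobs fit on-time
```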

(C) The last part of the question is to turn this into a (fully) polynomial-time approximation scheme. ('Fully' means that the running time is polynomial in $1/\epsilon$.) Again we copy from the knapsack problem. Let

$$M = \max_i w_i, \quad \mu = \frac{\epsilon M}{n}, \quad \text{and} \quad w'_i = \lfloor w_i/\mu \rfloor.$$

The maximum value $w'_i$ is $\lfloor M/\mu \rfloor = \lfloor n/\epsilon \rfloor$. In general, $w'_i \in \{0, 1, \ldots, \lfloor n/\epsilon \rfloor\}$. Let $W' = \sum_i w'_i$. Then $W' = O(n^2/\epsilon)$. Now apply the dynamic program to the rounded instance and take as solution the set S of jobs returned by the D.P. The D.P. runs in time $O(nW') = O(n^3/\epsilon)$.

The error that we make by this rounding is at most $\mu$ for each job. This gives a total error of at most $n\mu = \epsilon M \le \epsilon\,\mathrm{Opt}$.

We can make this more precise in the same way as was done for the knapsack problem. Let S be the set of jobs found by the D.P. and let O be an optimal set of jobs for the original (unrounded) instance. For any job i we have

$$w_i \ge \mu w'_i > \mu(w_i/\mu - 1) = w_i - \mu.$$

The value of the final solution is

$$\sum_{i\in S} w_i \ge \sum_{i\in S} \mu w'_i \;\stackrel{(1)}{\ge}\; \sum_{i\in O} \mu w'_i \ge \sum_{i\in O} (w_i - \mu) = \mathrm{Opt} - |O|\mu \ge \mathrm{Opt} - n\mu = \mathrm{Opt} - \epsilon M \;\stackrel{(2)}{\ge}\; \mathrm{Opt} - \epsilon\,\mathrm{Opt} = (1-\epsilon)\,\mathrm{Opt}.$$

(1): S is optimal for the rounded instance with values $w'_i$.
(2): $\mathrm{Opt} \ge M$ since taking only the job with the largest weight is a feasible solution.

Ex. 4.1 (This is a special form of the integer multicommodity flow problem of Section 5.11.) One approach is rounding an LP-solution. Let $\mathcal{P}_i$ be the set of the two paths for call i and let $\mathcal{P} = \cup_i \mathcal{P}_i$.

(ILP) min $Z$
s.t. $\sum_{P:\, e\in P} x_P \le Z$ for all edges e,
$\sum_{P\in \mathcal{P}_i} x_P = 1$ for all calls i,
$x_P \in \{0, 1\}$ for all $P \in \mathcal{P}$.

The LP-relaxation is obtained by replacing the last constraint by:

$$x_P \ge 0 \quad \text{for all } P \in \mathcal{P}.$$

The algorithm first solves the LP and then rounds each variable. Let $\hat{x}_P = 1$ if $x^*_P > 1/2$ and let $\hat{x}_P = 0$ if $x^*_P < 1/2$. If the two paths for some call both have value exactly 1/2, then choose P arbitrarily among the two, set $\hat{x}_P = 1$, and set it to zero for the other path. Now, take P in the solution if and only if $\hat{x}_P = 1$. The solution is feasible since each call is assigned exactly one path. Moreover, the value is

$$\max_e \sum_{P:\, e\in P} \hat{x}_P \;\le\; 2 \max_e \sum_{P:\, e\in P} x^*_P \;=\; 2 Z^*_{LP} \;\le\; 2\,\mathrm{Opt},$$

i.e., the objective value of the rounded solution is at most twice the optimal value of the LP-relaxation, and hence at most $2\,\mathrm{Opt}$.
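A sketch of just the rounding step, assuming the LP has already been solved by an external solver. The data layout — two candidate paths per call, with `x_star[i]` holding their two fractional values — is an assumption of this sketch.

```python
def round_paths(x_star):
    """x_star: list of (value of path 0, value of path 1) per call,
    the two values summing to 1.  Returns the chosen path per call."""
    # Take the path with LP value > 1/2; an exact 1/2-1/2 tie is
    # broken arbitrarily (here: in favour of path 0).
    return [0 if v0 >= 0.5 else 1 for (v0, v1) in x_star]
```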

(EXTRA: Note that the following local search algorithm does NOT give a 2-approximation: Start with any solution. If there is some call i for which changing its path reduces the maximum load, then change its path. Repeat this as long as possible. (To prove that this is not a 2-approximation, you only need to give one example where the algorithm fails.)

Also, the following greedy algorithm does NOT give a 2-approximation: Assign paths to calls one by one. Take the first path arbitrarily. For each new call choose the direction that minimizes the maximum load.)

NB The following algorithm is also a 2-approximation: choose the shortest path for each call. Proof sketch: Let $e_1$ be the most loaded edge and let $L_1$ be its load. Let $e_2$ be the opposite edge on the circle. Let $L'_1$ and $L'_2$ be the loads on these edges in the optimal solution. We must have $L'_1 + L'_2 \ge L_1$.

Ex. 4.2 The proof is done by a simple swapping argument. Consider a solution in which the jobs are placed in the order $1, 2, \ldots, n$ and assume that $w_i/p_i < w_j/p_j$ for some pair $i, j = i+1$ with $1 \le i \le n-1$. We show that swapping jobs i and j reduces the total weighted completion time. Note that this swap only affects the completion times of i and j: the completion time of i increases by $p_j$ and the completion time of j decreases by $p_i$. The total increase in the objective is $w_i p_j - w_j p_i < 0$ since $w_i/p_i < w_j/p_j$.
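This exchange argument is Smith's rule: ordering by non-increasing $w_j/p_j$ is optimal. A tiny brute-force illustration in Python (not part of the proof):

```python
from itertools import permutations

def weighted_completion(order, p, w):
    # total weighted completion time sum_j w_j * C_j of the given order
    t, total = 0, 0
    for j in order:
        t += p[j]
        total += w[j] * t
    return total

# Smith's rule (sort by w_j/p_j decreasing) matches the best of all orders.
p, w = [3, 1, 2], [2, 3, 4]
smith = sorted(range(3), key=lambda j: w[j] / p[j], reverse=True)
best = min(permutations(range(3)), key=lambda o: weighted_completion(o, p, w))
assert weighted_completion(smith, p, w) == weighted_completion(best, p, w)
```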

Ex. 4.3 Denote $[n] = \{1, 2, \ldots, n\}$ and for any set $S \subseteq [n]$ let $p(S) = \sum_{j\in S} p_j$. We can make a similar LP-relaxation as in Section 4.2.

(LP) min $\sum_{j=1}^n w_j C_j$
s.t. $C_i \le C_j - p_j$ for every pair $i \prec j$,
$\sum_{j\in S} p_j C_j \ge \frac{1}{2} p(S)^2$ for all $S \subseteq [n]$.

Let $C^*_j$ ($j = 1, \ldots, n$) denote an optimal LP-solution. For ease of analysis, let us relabel the jobs such that $C^*_1 \le C^*_2 \le \cdots \le C^*_n$. Now we schedule the jobs in the order $1, 2, \ldots, n$. This is a feasible schedule since if $i \prec j$ then the first constraint ensures that $C^*_i < C^*_j$. In other words, none of the precedence constraints is violated if we schedule the jobs in this order.

Let $C_j$ be the completion time of job j in our schedule. Then

$$C_j = p([j]). \qquad (1)$$

Below we prove, in exactly the same way as in the proof of Theorem 4.4 (book), that $C^*_j \ge \frac{1}{2}\, p([j])$. Combining this with (1) we see that the total weighted completion time of our schedule is

$$\sum_{j=1}^n w_j C_j = \sum_{j=1}^n w_j\, p([j]) \le 2 \sum_{j=1}^n w_j C^*_j \le 2\,\mathrm{Opt}.$$

Proof of the claim: let $S = [j]$. From the second LP-constraint, and since $C^*_k \le C^*_j$ for all $k \in S$, it follows that

$$C^*_j\, p(S) = C^*_j \sum_{k\in S} p_k \;\ge\; \sum_{k\in S} p_k C^*_k \;\ge\; \frac{1}{2}\, p(S)^2 \;\;\Rightarrow\;\; C^*_j \ge \frac{1}{2}\, p(S) = \frac{1}{2}\, p([j]).$$

Ex. 4.4 and 4.5: Bin packing was not done.

Ex. 4.6 (a) Let x be a solution to the LP with value $Z^*$. Say that an edge $(i, j)$ is fractional if $0 < x_{ij} < 1$. Consider the graph $G_x$ defined as G restricted to the fractional edges. If x is fractional then $G_x$ has at least one edge (by definition of a fractional solution). We show that we can find another solution y with value equal to $Z^*$ but with strictly fewer fractional edges. Then, by repeating the argument, we end up with a 0,1-solution with value $Z^*$.

First assume that $G_x$ has a cycle C. Since the original graph G is bipartite, C must be even. Pick any edge $e \in C$ and increase its x-value by a small $\epsilon > 0$. Next, decrease and increase the x-values on C alternatingly by $\epsilon$. For small enough $\epsilon$ the solution stays feasible. If the cost of the solution increases by these changes, then make the changes the other way around, i.e., decrease the x-value of e and continue from there. We can choose $\epsilon$ such that the solution stays feasible, has no larger cost, and at least one fractional value becomes 0 or 1.

Now assume that $G_x$ has no cycles. Then it has a path P for which both endpoints have degree 1. Note that the endpoints must be in B since, by the first LP-constraint, each point in A has degree at least 2 in $G_x$. Let $\epsilon > 0$ be small. On P, we can alternatingly decrease and increase the x-values by $\epsilon$ such that all constraints remain satisfied, the cost does not increase, and at least one edge gets an x-value of either 0 or 1. (Since the endpoints are in B, all constraints remain satisfied for small enough $\epsilon$.)

(b) This follows directly from the analysis in (a). Consider $G_x$ and let C be an even cycle in $G_x$. (The case that $G_x$ has a path P with both endpoints of degree 1 works the same.) Define y by alternatingly increasing and decreasing the x-values on C by $\epsilon > 0$. Define z in the same way but with the $+\epsilon$ and $-\epsilon$ switched. Then (for small enough $\epsilon$) y and z are feasible and $x = 0.5y + 0.5z$.

Ex. 5.1 Assign each vertex uniformly at random to one of the sets $V_1, \ldots, V_k$. We need to show that the expected weight of the corresponding k-cut is at least $(k-1)/k$ times the optimal value. As an upper bound for Opt we take $\mathrm{Opt} \le \sum_{(i,j)\in E} w_{ij}$. For any edge $e = (i, j)$, the probability that e appears in the cut is exactly $(k-1)/k$. (Assume that i is assigned first; then the edge appears in the cut only if j is assigned to a different set than i.) Let W be the total weight of the cut. By linearity of expectation,

$$E[W] = \sum_{(i,j)\in E} \frac{k-1}{k}\, w_{ij} = \frac{k-1}{k} \sum_{(i,j)\in E} w_{ij} \;\ge\; \frac{k-1}{k}\,\mathrm{Opt}.$$
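A direct Python sketch of this randomized algorithm; the edge-list input format is an assumption of the sketch.

```python
import random

def random_k_cut(vertices, weighted_edges, k):
    """Assign each vertex to one of k sets uniformly at random and return
    the cut weight; its expectation is at least (k-1)/k times Opt."""
    side = {v: random.randrange(k) for v in vertices}
    return sum(w for (i, j, w) in weighted_edges if side[i] != side[j])
```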



Ex. 5.2 (a) Note that the unweighted max cut problem is considered here, but the same applies to the weighted version. In fact, the proof is easier to write down for the weighted version. Remember that in the weighted version we may assume that the graph is complete (since missing edges simply get weight 0). Let n be the number of vertices and let $W_G$ be the total weight of G:

$$W_G = \sum_{k=1}^n \sum_{j=1}^{k-1} w_{kj}.$$

In the k-th iteration (when vertex k is placed), the weight that is added to the cut is at least $\frac{1}{2} \sum_{j=1}^{k-1} w_{kj}$. Hence, the total weight of the cut is at least

$$\sum_{k=1}^n \frac{1}{2} \sum_{j=1}^{k-1} w_{kj} = \frac{1}{2}\, W_G \;\ge\; \frac{1}{2}\,\mathrm{Opt}.$$

(b) See the notes for Section 5.3. Let $S_{k-1}$ denote the assignment of the first $k-1$ vertices. The derandomized algorithm assigns $v_k$ to U if

$$E[Z \mid S_{k-1},\, v_k \in U] \;\ge\; E[Z \mid S_{k-1},\, v_k \in W]$$

and assigns it to W otherwise. This corresponds exactly with choosing the side for which the weight of the edges added to the cut is maximized.
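A sketch of the derandomized algorithm: since only the edges between $v_k$ and the already-placed vertices are affected, comparing the two conditional expectations amounts to comparing the edge weight gained on either side. The adjacency-matrix input is an assumption of the sketch.

```python
def greedy_max_cut(n, w):
    """w[i][j]: symmetric edge weight (0 if no edge).  Places vertices
    0..n-1 one by one on the side that maximizes the newly cut weight,
    i.e. the side with the larger conditional expectation."""
    U, W_side = set(), set()
    for k in range(n):
        to_U = sum(w[k][j] for j in W_side)  # cut weight gained if k -> U
        to_W = sum(w[k][j] for j in U)       # cut weight gained if k -> W
        (U if to_U >= to_W else W_side).add(k)
    return U, W_side
```

By part (a), each step adds at least half of the weight between $v_k$ and the earlier vertices, so the final cut has weight at least $\frac{1}{2} W_G \ge \frac{1}{2}\,\mathrm{Opt}$.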

Ex. 5.3 Same algorithm as for the undirected problem: assign each vertex uniformly at random to either U or W. For each directed edge $(i, j)$, the probability that it appears in the cut is exactly $\Pr(i \in U \text{ and } j \in W) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$. Let W be the total weight of the cut. Then

$$E[W] = \sum_{(i,j)\in E} \frac{1}{4}\, w_{ij} \;\ge\; \frac{1}{4}\,\mathrm{Opt}.$$

Ex. 5.4 Let $(y, z)$ be an optimal solution to the LP on page 107 and let $Z_{LP}$ be its value. When we apply randomized rounding as suggested, we see that the probability that an arbitrary clause $C_j$ is satisfied is bounded as follows. Remember that $P_j$ is the set of (indices of) variables that appear positively in $C_j$ and $N_j$ is the set of variables that appear negated in $C_j$.

$$\begin{aligned}
\Pr(C_j \text{ not sat.}) &= \prod_{i\in P_j} \Big(1 - \big(\tfrac{y_i}{2} + \tfrac{1}{4}\big)\Big) \prod_{i\in N_j} \Big(\tfrac{y_i}{2} + \tfrac{1}{4}\Big) \\
&= \prod_{i\in P_j} \Big(\tfrac{3}{4} - \tfrac{y_i}{2}\Big) \prod_{i\in N_j} \Big(\tfrac{y_i}{2} + \tfrac{1}{4}\Big) \\
&\le \Bigg[\frac{1}{l_j}\Bigg(\sum_{i\in P_j} \Big(\tfrac{3}{4} - \tfrac{y_i}{2}\Big) + \sum_{i\in N_j} \Big(\tfrac{y_i}{2} + \tfrac{1}{4}\Big)\Bigg)\Bigg]^{l_j} \\
&= \Bigg[\frac{1}{l_j}\Bigg(\tfrac{3}{4}\, l_j - \tfrac{1}{2}\Big(\sum_{i\in P_j} y_i + \sum_{i\in N_j} (1 - y_i)\Big)\Bigg)\Bigg]^{l_j} \\
&= \Bigg[\tfrac{3}{4} - \frac{1}{2 l_j}\Big(\sum_{i\in P_j} y_i + \sum_{i\in N_j} (1 - y_i)\Big)\Bigg]^{l_j} \\
&\le \Bigg[\tfrac{3}{4} - \frac{z_j}{2 l_j}\Bigg]^{l_j}.
\end{aligned}$$

For the first inequality above we used Fact 5.8, and for the second inequality we used the LP-constraint. We get that

$$\Pr(C_j \text{ is sat.}) \;\ge\; 1 - \Bigg[\tfrac{3}{4} - \frac{z_j}{2 l_j}\Bigg]^{l_j}.$$

Denote the righthand side by $f(z_j)$. This function is concave for $l_j \ge 1$. (We omit the proof of this.) Now we use Fact 5.9. Let

$$a = f(0) = 1 - \Big(\tfrac{3}{4}\Big)^{l_j} \quad \text{and} \quad b = f(1) - f(0) = \Big(\tfrac{3}{4}\Big)^{l_j} - \Big(\tfrac{3}{4} - \frac{1}{2 l_j}\Big)^{l_j}.$$

Then

$$\Pr(C_j \text{ is sat.}) \;\ge\; f(z_j) \;\ge\; a + b\, z_j \;=\; 1 - \Big(\tfrac{3}{4}\Big)^{l_j} + \Bigg[\Big(\tfrac{3}{4}\Big)^{l_j} - \Big(\tfrac{3}{4} - \frac{1}{2 l_j}\Big)^{l_j}\Bigg] z_j.$$

We use that $1 - \big(\tfrac{3}{4}\big)^{l_j} \ge \Big(1 - \big(\tfrac{3}{4}\big)^{l_j}\Big) z_j$. Then the inequality reduces to

$$\Pr(C_j \text{ is sat.}) \;\ge\; z_j - \Big(\tfrac{3}{4} - \frac{1}{2 l_j}\Big)^{l_j} z_j.$$

For $l_j = 1$ and $l_j = 2$ we have

$$\Pr(C_j \text{ is sat.}) \;\ge\; z_j - \tfrac{1}{4}\, z_j = \tfrac{3}{4}\, z_j.$$



For $l_j \ge 3$ we proceed as in Section 5.5 and use that $(1 - 1/k)^k < 1/e$ for all $k \ge 1$:

$$\Big(\tfrac{3}{4} - \frac{1}{2 l_j}\Big)^{l_j} = \Big(\tfrac{3}{4}\Big)^{l_j} \Big(1 - \frac{2}{3 l_j}\Big)^{l_j} = \Big(\tfrac{3}{4}\Big)^{l_j} \Bigg(\Big(1 - \frac{1}{(3/2) l_j}\Big)^{(3/2) l_j}\Bigg)^{2/3} < \Big(\tfrac{3}{4}\Big)^{l_j} \Big(\tfrac{1}{e}\Big)^{2/3} < \tfrac{1}{4}, \quad \text{for } l_j \ge 3.$$

Hence, also for $l_j \ge 3$ we have

$$\Pr(C_j \text{ is sat.}) \;>\; z_j - \tfrac{1}{4}\, z_j = \tfrac{3}{4}\, z_j.$$

Let W be the total weight of the satisfied clauses. Then

$$E[W] \;\ge\; \sum_{j=1}^m \tfrac{3}{4}\, w_j z_j = \tfrac{3}{4} \sum_{j=1}^m w_j z_j = \tfrac{3}{4}\, Z_{LP} \;\ge\; \tfrac{3}{4}\,\mathrm{Opt}.$$
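A sketch of this rounding scheme in Python; the clause format (index sets $P_j$, $N_j$) follows the text, everything else is illustrative.

```python
import random

def round_maxsat(y, clauses):
    """y: optimal LP values y_i.  clauses: list of pairs (P, N) of index
    sets for the positive and negated variables of each clause.
    Sets x_i true with probability y_i/2 + 1/4, as suggested."""
    x = [random.random() < yi / 2 + 0.25 for yi in y]
    satisfied = [any(x[i] for i in P) or any(not x[i] for i in N)
                 for (P, N) in clauses]
    return x, satisfied
```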

Ex. 5.5 –

Ex. 5.6 (a) It is enough to show that for any optimal solution of the max directed cut problem there is a feasible solution for the given mixed ILP with at least the same value and, vice versa, for any optimal solution of the mixed ILP there is a feasible solution for the max directed cut problem with at least the same value.

Let $U \subset V$ be a feasible solution for the max directed cut problem and let $W = V \setminus U$. Now choose $x_i = 1$ if $i \in U$ and $x_i = 0$ otherwise. Choose $z_{ij} = 1$ if $i \in U$ and $j \in W$, and choose $z_{ij} = 0$ otherwise. Note that the values of the two solutions (max cut and ILP) are the same.

For the other direction, assume that $x_i, z_{ij}$ is optimal for the given mixed ILP. Now assign i to U if $x_i = 1$. The value of the solution remains the same.

(b)

$$\begin{aligned}
\Pr((i,j) \text{ in cut}) &= \Pr(i \in U \text{ and } j \in W) = \Pr(i \in U)\Pr(j \in W) \\
&= (1/4 + x_i/2)(1 - (1/4 + x_j/2)) \\
&= (1/4 + x_i/2)(1/4 + (1 - x_j)/2) \\
&\ge (1/4 + z_{ij}/2)(1/4 + z_{ij}/2) \\
&= (1/4 - z_{ij}/2)^2 + z_{ij}/2 \\
&\ge z_{ij}/2,
\end{aligned}$$

where the first inequality uses the ILP-constraints $z_{ij} \le x_i$ and $z_{ij} \le 1 - x_j$. Let Opt be the maximum weight of a cut and hence the optimal value of the given mixed ILP. Let $\mathrm{Opt}_{LP}$ be the optimal value of the LP-relaxation that is obtained by replacing $x_i \in \{0, 1\}$ by $0 \le x_i \le 1$. Let W be the total weight of the cut obtained by the proposed algorithm. Then,

$$E[W] = \sum_{(i,j)\in E} \Pr((i,j) \text{ in cut}) \cdot w_{ij} \;\ge\; \sum_{(i,j)\in E} w_{ij}\, z_{ij}/2 \;\ge\; \mathrm{Opt}_{LP}/2 \;\ge\; \mathrm{Opt}/2.$$


Ex. 5.8 Note that there are only positive variables: $N_j = \emptyset$ for all j.

(IP) max $Z = \sum_{j=1}^m w_j z_j + \sum_{i=1}^n v_i(1 - y_i)$
s.t. $\sum_{i\in P_j} y_i \ge z_j$, $\quad j = 1, \ldots, m$,
$z_j \in \{0, 1\}$ (relaxation: $0 \le z_j \le 1$), $\quad j = 1, \ldots, m$,
$y_i \in \{0, 1\}$ (relaxation: $0 \le y_i \le 1$), $\quad i = 1, \ldots, n$.

Let $(y, z)$ be an optimal solution to the LP-relaxation. Let W be the weight of the satisfied clauses and V the weight of the false variables after the rounding. Since

$$\Pr(x_i \text{ is false}) = 1 - (1 - \lambda + \lambda y_i) = \lambda(1 - y_i),$$

we get

$$E[V] = \lambda \sum_{i=1}^n v_i (1 - y_i). \qquad (2)$$

So the expected value of the false variables is at least λ times their value in the optimal LP-solution. Next we show a bound for the first part of the objective, the weight of the satisfied clauses. Then the approximation factor will be the minimum of the two bounds. We shall choose λ such that this minimum is maximized. Assume that clause $C_j$ has k literals. Then,

$$\begin{aligned}
\Pr(C_j \text{ not sat.}) &= \prod_{i\in P_j} \lambda(1 - y_i) \\
&\le \Bigg(\frac{1}{k} \sum_{i\in P_j} \lambda(1 - y_i)\Bigg)^k \\
&= \Bigg(\frac{\lambda}{k}\Big(k - \sum_{i\in P_j} y_i\Big)\Bigg)^k \\
&\le \Big(\frac{\lambda}{k}\,(k - z_j)\Big)^k = \lambda^k \Big(1 - \frac{z_j}{k}\Big)^k.
\end{aligned}$$

The first inequality follows from Fact 5.8 and for the second we used the LP-constraint. Hence

$$\Pr(C_j \text{ is sat.}) \;\ge\; 1 - \lambda^k \Big(1 - \frac{z_j}{k}\Big)^k.$$

The function of $z_j$ on the righthand side is concave on $[0, 1]$ for $\lambda > 0$ and $k \ge 1$. (This follows directly from the concavity of $1 - (1 - z_j/k)^k$, which was shown on page 108.) For $z_j = 0$ its value is $1 - \lambda^k > 0$. Therefore, for $z_j \in [0, 1]$ we have

$$\Pr(C_j \text{ is sat.}) \;\ge\; \Big(1 - \lambda^k \big(1 - \tfrac{1}{k}\big)^k\Big) z_j.$$



For $k = 1$ the above inequality implies $\Pr(C_j \text{ is sat.}) \ge z_j$. For $k = 2$ we get

$$\Pr(C_j \text{ is sat.}) \;\ge\; \big(1 - \lambda^2/4\big)\, z_j.$$

Looking at (2), the approximation factor is $\min\{\lambda,\, 1 - \lambda^2/4\}$ if each clause has at most two literals. Let us first maximize this minimum and then check that things work out well for $k \ge 3$. The minimum is maximized for $\lambda^* = 2\sqrt{2} - 2$. It is easy to check that $\Pr(C_j \text{ is sat.}) \ge \lambda^* z_j$ if $C_j$ has $k = 3$ or $k = 4$ variables. For $k \ge 5$ we use the general bound $(1 - 1/k)^k < 1/e$ and get

$$\Pr(C_j \text{ is sat.}) \;\ge\; \Big(1 - \frac{(\lambda^*)^k}{e}\Big) z_j \;>\; \lambda^* z_j, \quad (\text{for } \lambda^* = 2\sqrt{2} - 2).$$

The expected weight of the solution is

$$E[V] + E[W] \;\ge\; \lambda^* Z^*_{LP} \;\ge\; \lambda^*\,\mathrm{Opt}.$$

Ex. 6.1, 6.2 –

Ex. 6.3 Hint: Construct the following directed graph $G = (V, A)$. For every variable $x_i$ there are two vertices in V: $v_i$ and $v'_i$. For every clause there are two arcs in A, defined as follows:

For every $(x_i \lor x_j)$ there is an arc $(v'_i, v_j)$ and an arc $(v'_j, v_i)$.
For every $(\bar{x}_i \lor \bar{x}_j)$ there is an arc $(v_i, v'_j)$ and an arc $(v_j, v'_i)$.
For every $(x_i \lor \bar{x}_j)$ there is an arc $(v'_i, v'_j)$ and an arc $(v_j, v_i)$.

(Figure 1 shows the graph for the example formula $(\bar{x}_1 \lor x_2), (\bar{x}_2 \lor x_3), (x_1 \lor \bar{x}_3), (x_2 \lor x_3)$.)

Prove that the formula is satisfiable if and only if there is no i for which there is a directed path in G from $v_i$ to $v'_i$ and a directed path from $v'_i$ to $v_i$.
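A sketch that builds this implication graph and checks the criterion from the hint with plain DFS reachability. The literal encoding (signed 1-based integers) is an assumption of the sketch; a linear-time version would use strongly connected components instead.

```python
def two_sat_satisfiable(n, clauses):
    """n variables; clauses: list of pairs (a, b) of signed 1-based
    literals, e.g. (-1, 2) for (not x_1 or x_2).  Vertex 2*i encodes
    v_i ("x_i true"), vertex 2*i+1 encodes v'_i ("x_i false")."""
    arcs = [[] for _ in range(2 * n)]
    node = lambda lit: 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
    neg = lambda v: v ^ 1          # flips v_i <-> v'_i
    for a, b in clauses:
        # (a or b) gives the implications (not a -> b) and (not b -> a),
        # exactly the two arcs per clause described in the hint
        arcs[neg(node(a))].append(node(b))
        arcs[neg(node(b))].append(node(a))

    def reaches(s, t):             # plain DFS reachability
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for v in arcs[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    # unsatisfiable iff some v_i and v'_i can each reach the other
    return not any(reaches(2 * i, 2 * i + 1) and reaches(2 * i + 1, 2 * i)
                   for i in range(n))
```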

Ex. 6.4 (a) Start at any vertex v and give it color 1. Then its neighbors must get color 2, their neighbors color 1 again, and so on. Repeat this until all vertices are colored (in which case we found a 2-coloring) or until some vertex is assigned both colors (in which case the graph is not 2-colorable).

(b) Color the vertices in any order. Since each vertex v has at most ∆ neighbors, there is always a color in $\{1, 2, \ldots, ∆ + 1\}$ which is not used by any of the neighbors of v.
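Python sketches of both parts; the adjacency-dict input format is an assumption. Part (a) is a BFS over each component, part (b) the greedy coloring.

```python
from collections import deque

def two_color(adj):
    """adj: dict mapping each vertex to its neighbors.  Returns a color
    dict, or None if the graph is not 2-colorable."""
    color = {}
    for start in adj:                        # handle disconnected graphs
        if start in color:
            continue
        color[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 3 - color[u]  # alternate colors 1 and 2
                    queue.append(v)
                elif color[v] == color[u]:   # v would need both colors
                    return None
    return color

def greedy_color(adj):
    """Greedy coloring: vertex v has at most Delta neighbors, so some
    color in {1, ..., Delta+1} is always free."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(1, len(adj[v]) + 2) if c not in used)
    return color
```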
