A subgradient-based branch-and-bound algorithm for the ...

4.4 Using the volume algorithm instead of subgradient optimization

We finally also compared BB-SG with the same branch-and-bound method, where, however, the volume algorithm is used instead of subgradient optimization for computing lower bounds (BB-VA). In contrast to BB-SG, the volume algorithm estimates primal solutions by taking an exponentially smoothed average of the generated Lagrangean solutions. The guessed primal solution is also used to determine a direction in which the method searches for improved dual solutions, whereas BB-SG simply makes a step in the direction of a subgradient. In our experiments, we used a smoothing parameter µ = 0.1. We also experimented with taking simple arithmetic averages, but this proved far less effective. Barahona and Anbil (2000) and Barahona and Chudak (2005) suggested choosing

µ = max{µmax/10, min{µ*, µmax}},

where µ* minimizes ‖µ s^t + (1 − µ) g^t‖, s^t and g^t denote the current subgradient and direction, respectively, and µmax is initially set to 0.1 and halved if ZD(λ) has not increased by 1 % within 100 consecutive iterations. Table 12 compares the performance of the two methods on the test problem instances from Klose and Görtz (2007).
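The volume-algorithm step described above (exponentially smoothed primal estimate, direction mixing, and the Barahona–Anbil choice of µ) can be sketched as follows. This is a minimal illustrative sketch under our own naming conventions, not the paper's implementation; the step-size rule and termination logic are omitted.

```python
import numpy as np

def volume_step(lmbda, x_bar, x_t, s_t, g_t, step, mu_max=0.1):
    """One iteration of a volume-algorithm-style dual update (sketch).

    lmbda : current dual multipliers
    x_bar : exponentially smoothed primal estimate
    x_t   : primal solution of the current Lagrangean subproblem
    s_t   : current subgradient
    g_t   : current search direction
    """
    # Choose the smoothing weight mu as suggested by Barahona and Anbil
    # (2000): mu* minimizes ||mu*s_t + (1 - mu)*g_t||, then mu is clamped
    # to the interval [mu_max/10, mu_max].
    d = s_t - g_t
    denom = d @ d
    mu_star = (-(g_t @ d) / denom) if denom > 0 else mu_max
    mu = max(mu_max / 10.0, min(mu_star, mu_max))

    # Exponentially smoothed primal estimate (the "guessed" primal solution).
    x_bar = mu * x_t + (1.0 - mu) * x_bar

    # The new search direction mixes the subgradient with the previous
    # direction; the dual step is taken along this direction (BB-SG would
    # instead step directly along s_t).
    g_new = mu * s_t + (1.0 - mu) * g_t
    lmbda = lmbda + step * g_new
    return lmbda, x_bar, g_new, mu
```

Note that minimizing ‖µ s^t + (1 − µ) g^t‖ over µ is a one-dimensional quadratic problem, so µ* has the closed form −(g^t · d)/(d · d) with d = s^t − g^t, which the sketch uses directly.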

Table 13 again summarizes these results by additionally taking averages over the different ratios r. BB-SG significantly outperformed the method based on the volume algorithm; on average, BB-SG was 4 to 5 times faster than BB-VA. The clear superiority of BB-SG suggests that the method would still do better than BB-VA even if the smoothing parameter µ were determined in a more sophisticated way. We therefore refrained from extending the comparison to the other types of test problems. In our computational experiments, we observed in particular that the method based on the volume algorithm at times showed weak dual convergence behavior and had difficulty getting close enough to the optimal Lagrangean dual solution. In such cases, the branch-and-bound method went deep down the enumeration tree and enumerated too many nodes. In contrast to BB-SG, the method based on the volume algorithm also needs to average the solutions x^t and thus usually required more computational effort per node than BB-SG.

5 Conclusions

In this paper we proposed a simple subgradient-based branch-and-bound algorithm for solving the CFLP exactly. The method's main idea is to average the solutions obtained for the Lagrangean subproblem in order to estimate fractional primal solutions to the Lagrangean dual, and to base branching decisions on this estimated primal solution. When combined with a best-lower-bound search strategy, the method consistently and significantly outperformed other existing state-of-the-art methods for solving the CFLP exactly. The method is capable of solving difficult instances of the CFLP with up to 1000 customers and 1000 potential facility sites, as well as 1500 customers and 600 potential facility sites. The method also proved rather robust, in the sense that it worked well on sets of test problem instances with different problem characteristics. In addition, the method has the advantage of being relatively simple and relatively easy to implement. We think that the efficiency of the method can primarily be attributed to the following points: (i) subgradient optimization works very well for getting very close to the Lagrangean
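The branching idea summarized above, selecting a branching variable from the averaged (fractional) primal estimate, can be sketched as follows. This is an illustrative sketch only; the paper's actual branching rule may differ in its details, and the function and parameter names are ours.

```python
def pick_branching_facility(y_bar, open_fixed, closed_fixed):
    """Choose a facility variable to branch on from the estimated fractional
    primal solution y_bar (the averaged Lagrangean solutions).

    Returns the index of the free facility whose estimated value is closest
    to 1/2 (the "most fractional" one), or None if all free variables are
    (near-)integral.
    """
    best_j, best_frac = None, 0.0
    for j, y in enumerate(y_bar):
        if j in open_fixed or j in closed_fixed:
            continue  # already fixed by earlier branching decisions
        frac = min(y, 1.0 - y)  # distance from integrality, maximal at 0.5
        if frac > best_frac + 1e-9:
            best_j, best_frac = j, frac
    return best_j
```

Branching then creates two child nodes, one fixing facility best_j open and one fixing it closed, and under a best-lower-bound strategy the node with the smallest lower bound is explored next.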

