The Algorithm Design Manual. Springer-Verlag, 1998.
Constrained and Unconstrained Optimization

● How smooth is my function? The main difficulty of global optimization is getting trapped in local optima. Consider the problem of finding the highest point in a mountain range. If there is only one mountain and it is nicely shaped, we can find the top just by walking in whatever direction is up. However, if there are many false summits or other mountains in the area, it is difficult to convince ourselves that we are really at the highest point. Smoothness is the property that enables us to find a local optimum quickly from a given point. We assume smoothness in seeking the peak of the mountain by walking up. If the height at any given point were a completely random function, there would be no way to find the optimum height short of sampling every single point.

Efficient algorithms for unconstrained global optimization use derivatives and partial derivatives to find local optima. These derivatives point out the direction in which moving from the current point does the most to increase or decrease the function. They can sometimes be computed analytically, or they can be estimated numerically by taking the difference between the values of nearby points. A variety of steepest-descent and conjugate-gradient methods for finding local optima have been developed, similar in many ways to numerical root-finding algorithms.

It is a good idea to try several different methods on any given optimization problem. For this reason, we recommend experimenting with the implementations below before attempting to implement your own method. Clear descriptions of these algorithms appear in several numerical algorithms books, in particular [PFTV86].

For constrained optimization, finding points that satisfy all the constraints is often the difficult part of the problem. One approach is to use a method for unconstrained optimization, but add a penalty according to how many constraints are violated.
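The steepest-descent idea above can be sketched in a few lines of Python. The sketch below estimates partial derivatives numerically by central differences, then repeatedly steps downhill; the step size, tolerance, and test function are illustrative choices of ours, not taken from any particular package.

```python
def numerical_gradient(f, x, h=1e-6):
    """Estimate the partial derivatives of f at x by central differences."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

def steepest_descent(f, x, step=0.1, tol=1e-8, max_iters=10000):
    """Walk downhill from x until the gradient (nearly) vanishes.
    This finds a local minimum only -- on a function with many
    false summits it can stop far from the global optimum."""
    for _ in range(max_iters):
        g = numerical_gradient(f, x)
        if sum(gi * gi for gi in g) < tol:
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# A smooth "one mountain" (here, one valley) function: minimum at (1, 2).
f = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
print(steepest_descent(f, [5.0, -3.0]))
```

On a smooth convex function like this one, the walk converges to the unique optimum; on a bumpy function, the same loop converges only to whichever local optimum lies downhill of the starting point.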
Determining the right penalty function is problem-specific, but it often makes sense to vary the penalties as optimization proceeds. At the end, the penalties should be very high, to ensure that all constraints are satisfied.

Simulated annealing is a fairly robust and simple approach to constrained optimization, particularly when we are optimizing over combinatorial structures (permutations, graphs, subsets) instead of continuous functions. Techniques for simulated annealing are described in Section .

Implementations: Several of the Collected Algorithms of the ACM are Fortran codes for unconstrained optimization, most notably Algorithm 566 [MGH81], Algorithm 702 [SF92], and Algorithm 734 [Buc94]. Algorithm 744 [Rab95] does unconstrained optimization in Lisp. They are available from Netlib (see Section ). Also check out the selection at GAMS, the NIST Guide to Available Mathematical Software, at http://gams.nist.gov.

NEOS (the Network-Enabled Optimization System) provides a unique service: the opportunity to solve your problem on computers and software at Argonne National Laboratory over the WWW. Linear programming and unconstrained optimization are both supported. This is worth checking out at
http://www.mcs.anl.gov/home/otc/Server/ when you need a solution instead of a program.

General-purpose simulated annealing implementations are available and are probably the best place to start experimenting with this technique for constrained optimization. Particularly popular is Adaptive Simulated Annealing (ASA), written in C and retrievable via anonymous ftp from ftp.alumni.caltech.edu [131.215.139.234] in the /pub/ingber directory. To get on the ASA mailing list, send e-mail to asa-request@alumni.caltech.edu.

Genocop, by Zbigniew Michalewicz [Mic92], is a genetic algorithm-based program for constrained and unconstrained optimization, written in C. I tend to be quite skeptical of genetic algorithms (see Section ), but many people find them irresistible. Genocop is available from ftp://ftp.uncc.edu/coe/evol/ for noncommercial purposes.

Notes: Steepest-descent methods for unconstrained optimization are discussed in most books on numerical methods, including [PFTV86, BT92]. Unconstrained optimization is the topic of several books, including [Bre73, Fle80]. Simulated annealing was devised by Kirkpatrick et al. [KGV83] as a modern variation of the Metropolis algorithm [MRRT53]. Both use Monte Carlo techniques to compute the minimum-energy state of a system. Good expositions on simulated annealing include [AK89]. Genetic algorithms were developed and popularized by Holland [Hol75, Hol92]. Expositions on genetic algorithms include [Gol89, Koz92, Mic92]. Tabu search [Glo89a, Glo89b, Glo90] is yet another heuristic search procedure with a devoted following.

Related Problems: Linear programming (see page ), satisfiability (see page ).

Next: Linear Programming Up: Numerical Problems Previous: Determinants and Permanents

Mon Jun 2 23:33:50 EDT 1997
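The penalty-function and simulated annealing ideas described above combine naturally. The sketch below anneals over subsets (a combinatorial structure) for a toy knapsack-style problem: maximize total value subject to a weight limit, with infeasibility charged through a penalty that grows as the temperature drops. The problem instance, cooling schedule, and penalty growth rate are illustrative choices of ours, not taken from ASA, Genocop, or any other package named above.

```python
import math
import random

def anneal_subset(values, weights, capacity, seed=0):
    """Simulated annealing over subsets: maximize total value subject
    to a weight limit, enforced by a growing penalty on violations."""
    rng = random.Random(seed)
    n = len(values)
    sol = [False] * n          # start from the empty (feasible) subset
    temp, penalty = 10.0, 1.0

    def value(s):  return sum(v for v, c in zip(values, s) if c)
    def weight(s): return sum(w for w, c in zip(weights, s) if c)
    def score(s):  # penalize infeasible subsets by how badly they overflow
        return value(s) - penalty * max(0, weight(s) - capacity)

    best, best_val = list(sol), 0
    for _ in range(20000):
        i = rng.randrange(n)
        old = score(sol)
        sol[i] = not sol[i]                  # flip one item in or out
        delta = score(sol) - old
        # Accept improvements always; worsenings with probability e^(delta/T).
        if delta < 0 and rng.random() >= math.exp(delta / temp):
            sol[i] = not sol[i]              # reject: undo the flip
        temp *= 0.9995                       # cool slowly
        penalty *= 1.0005                    # tighten the constraint penalty
        if weight(sol) <= capacity and value(sol) > best_val:
            best, best_val = list(sol), value(sol)
    return best, best_val

# Toy instance: six items, weight limit 10.
values  = [6, 5, 5, 4, 3, 2]
weights = [5, 4, 3, 6, 2, 7]
best, best_val = anneal_subset(values, weights, 10)
print(best, best_val)
```

Early on, the high temperature lets the search wander freely across subsets, feasible or not; as the temperature falls and the penalty rises, the walk settles into a good solution that satisfies the constraint. Tracking the best feasible subset seen ensures the answer returned is always valid.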