The.Algorithm.Design.Manual.Springer-Verlag.1998
Constrained and Unconstrained Optimization

Input description: A function f(x_1, ..., x_n).

Problem description: What point p = (p_1, ..., p_n) maximizes (or minimizes) the function f?

Discussion: Most of this book concerns algorithms that optimize one thing or another. This section considers the general problem of optimizing functions where, due to lack of structure or knowledge, we are unable to exploit the problem-specific algorithms seen elsewhere in this book.

Optimization arises whenever there is an objective function that must be tuned for optimal performance. Suppose we are building a program to identify good stocks to invest in. We have certain financial data available to analyze, such as the price-earnings ratio, the interest and inflation rates, and the stock price, all as functions of time t. The key question is how much weight we should give to each of these factors, where these weights correspond to the coefficients of a formula:

goodness(t) = c_1 · PE-ratio(t) + c_2 · interest(t) + c_3 · inflation(t) + c_4 · price(t)

We seek the numerical values c_1, c_2, c_3, c_4 whose stock-goodness function does the best job of evaluating stocks. Similar issues arise in tuning evaluation functions for game-playing programs such as chess.
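The weighted objective above can be sketched in a few lines. This is a minimal illustration, not code from the text: the factor names, weights, and sample values are all hypothetical.

```python
# Hypothetical sketch of the stock-goodness objective: a weighted
# linear combination of financial factors at one point in time.

def goodness(weights, factors):
    """Weighted sum c_1*x_1 + c_2*x_2 + ... of the given factors."""
    return sum(c * x for c, x in zip(weights, factors))

# Illustrative coefficients for (PE ratio, interest, inflation, price)
c = [0.5, -1.2, -0.8, 0.1]
snapshot = [15.0, 0.05, 0.03, 42.0]
score = goodness(c, snapshot)
```

Tuning the coefficients c then becomes exactly the optimization problem this section discusses: find the weight vector whose resulting scores best match observed stock performance.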
Unconstrained optimization problems also arise in scientific computation. Physical systems from protein structures to particles naturally seek to minimize their ``energy functions.'' Thus programs that attempt to simulate nature often define energy potential functions for the possible configurations of objects, and then take as the ultimate configuration the one that minimizes this potential. Global optimization problems tend to be hard, and there are many ways to go about them. Ask the following questions to steer yourself in the right direction: ● Am I doing constrained or unconstrained optimization? - In unconstrained optimization, there are no limitations on the values of the parameters other than that they maximize (or minimize) the value of f. Often, however, there are costs or constraints on these parameters. These constraints make certain points illegal, points that might otherwise be the global optimum. Constrained optimization problems typically require mathematical programming approaches such as linear programming. ● Is the function I am trying to optimize described by a formula or by data? - If the function that you seek to optimize is presented as an algebraic formula (such as finding the minimum of a polynomial), the solution is to take its derivative f' analytically and see for which points p' we have f'(p') = 0. These points are either local maxima or minima, and they can be distinguished by taking a second derivative or by just plugging them back into f and seeing what happens. Symbolic computation systems such as Mathematica and Maple are fairly effective at computing such derivatives, although using computer algebra systems effectively is somewhat of a black art. They are definitely worth a try, however, and you can always use them to plot a picture of your function to get a better idea of what you are dealing with. ● How expensive is it to compute the function at a given point?
- If the function f is not presented as a formula, what to do depends upon what is given. Typically, we have a program or subroutine that evaluates f at a given point, and so we can request the value of any point on demand. By calling this function, we can poke around and try to guess the maxima. Our freedom to search in such a situation depends upon how efficiently we can evaluate f. If f is just a complicated formula, evaluation will be very fast. But suppose that f represents the effect of a set of coefficients c_1, c_2, ... on the performance of the board evaluation function in a computer chess program, such that c_1 is how much a pawn is worth, c_2 is how much a bishop is worth, and so forth. To evaluate a set of coefficients as a board evaluator, we must play a series of games with it or test it on a library of known positions. Clearly, this is time-consuming, so we must be frugal in the number of evaluations of f we use. ● How many dimensions do we have? How many do we need? - The difficulty of finding a global maximum increases rapidly with the number of dimensions (or parameters). For this reason, it often pays to reduce the dimension by ignoring some of the parameters. This runs counter to intuition, for the naive programmer is likely to incorporate as many variables as possible into the evaluation function. But it is just too hard to tweak such a complicated function. Much better is to start with the three to five seemingly most important variables and do a good job optimizing the coefficients for these.
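The derivative-based recipe for formula-given functions can be made concrete for a quadratic. This is a hedged worked example with an arbitrary polynomial, not one from the text: for f(p) = a·p² + b·p + c we have f'(p) = 2a·p + b, so the unique critical point is p' = −b/(2a), and the sign of the second derivative f''(p) = 2a tells a minimum from a maximum.

```python
# Worked instance of the calculus recipe: find where f'(p') = 0 for a
# quadratic f(p) = a*p**2 + b*p + c, then use the second derivative
# f''(p) = 2*a to classify the critical point. (A computer algebra
# system such as SymPy automates this for messier formulas.)

def critical_point(a, b, c):
    """Return (p', 'min' or 'max') for f(p) = a p^2 + b p + c, a != 0."""
    p = -b / (2 * a)                  # solve f'(p) = 2 a p + b = 0
    kind = "min" if 2 * a > 0 else "max"
    return p, kind

# Example: f(p) = p^2 - 6p + 2 has its only critical point at p' = 3.
```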
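When each evaluation of f is expensive, as with the chess-coefficient example, a sensible starting point is a search that explicitly budgets its calls to f. The following is a minimal sketch, assuming only black-box access to f and minimization; the step schedule and budget are illustrative choices, not prescriptions from the text.

```python
# Frugal black-box minimization: greedy coordinate descent that counts
# every call to f and stops when the evaluation budget runs out.

def coordinate_descent(f, start, step=1.0, budget=200):
    """Adjust one coordinate at a time, halving the step when stuck."""
    calls = 0

    def ev(point):
        nonlocal calls
        calls += 1
        return f(point)

    best = list(start)
    best_val = ev(best)
    while step > 1e-6 and calls < budget:
        improved = False
        for i in range(len(best)):
            for delta in (step, -step):
                if calls >= budget:
                    break
                cand = best[:]
                cand[i] += delta
                val = ev(cand)
                if val < best_val:
                    best, best_val = cand, val
                    improved = True
        if not improved:
            step /= 2.0  # no neighbor helped: refine the step size
    return best, best_val, calls
```

For example, minimizing (x−1)² + (y+2)² from the origin locates (1, −2) in well under 200 evaluations. A search like this finds only a local optimum; restarting from several random points is the usual cheap hedge against that.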