Chapter 5 Robust Performance Tailoring with Tuning - SSL - MIT


At each major iteration, an approximation of the Hessian of the Lagrangian function is made using the quasi-Newton updating method of Broyden, Fletcher, Goldfarb and Shanno (BFGS). The Hessian approximation is then used along with first-order Taylor series approximations of the nonlinear constraints to generate a quadratic programming (QP) subproblem:

$$
\begin{aligned}
\min_{d \in \mathbb{R}^n} \quad & \tfrac{1}{2}\, d^T H_k\, d + \nabla f(x_k)^T d \\
\text{s.t.} \quad & \nabla g_i(x_k)^T d + g_i(x_k) = 0, \qquad i = 1, \dots, m_e \\
& \nabla g_i(x_k)^T d + g_i(x_k) \leq 0, \qquad i = m_e + 1, \dots, m
\end{aligned}
\tag{2.40}
$$

where $H_k$ is the Hessian approximation, $g_i$ are the constraint functions, and $m$ and $m_e$ are the total number of constraints and the number of equality constraints, respectively. The solution, $d_k$, is obtained through an active-set quadratic programming strategy and is used to form a new major iterate:

$$
x_{k+1} = x_k + \alpha_k d_k \tag{2.41}
$$

The step length, $\alpha_k$, is found through a line search that requires sufficient decrease in a particular merit function.
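In practice this iteration is rarely implemented from scratch. As one illustration, SciPy's SLSQP solver (Kraft's sequential least-squares QP, closely related to the SQP scheme described above) maintains a BFGS approximation internally and accepts constraints with user-supplied Jacobians. The toy problem below is hypothetical, chosen only so the analytic optimum is easy to verify:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (not from the thesis):
#   minimize  f(x) = (x0 - 1)^2 + (x1 - 2)^2
#   subject to the equality constraint  g(x) = x0 + x1 - 2 = 0.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def grad_f(x):
    # Analytic gradient, used in the Taylor expansion of the QP subproblem.
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

# One equality constraint (m = m_e = 1), with its Jacobian.
constraints = [{"type": "eq",
                "fun": lambda x: x[0] + x[1] - 2.0,
                "jac": lambda x: np.array([1.0, 1.0])}]

result = minimize(f, x0=np.zeros(2), jac=grad_f,
                  method="SLSQP", constraints=constraints)
print(result.x)  # close to the analytic optimum [0.5, 1.5]
```

Solving the Lagrange conditions by hand gives $x^* = (0.5,\ 1.5)$, which the solver recovers from the origin as the initial guess.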

If a problem is well-behaved and properly scaled, then gradient-based algorithms such as SQP are likely to find a global optimum as long as the objective function and constraints are convex. However, if the problem is non-convex, i.e., there are multiple solutions in the space that are locally optimal, then the algorithm may converge to a local minimum instead of the global one. In fact, the solution that is obtained depends on the initial guess chosen by the user. Non-convexity poses a difficult problem since there is no known way to prove definitively that a global optimum has been found instead of simply a local one. Therefore, there is an entire body of heuristic methods that can be employed to search for a global optimum. One simple heuristic is to randomly choose some number of initial guesses and run SQP, or some other gradient-based algorithm, from each of these starting points. Each resulting solution is then

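The multistart heuristic described above can be sketched in a few lines. The one-dimensional objective below is hypothetical, picked because it has two local minima of different depths, so a single gradient-based run from an unlucky start lands in the shallower basin:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical non-convex objective with two local minima:
#   f(x) = x^4 - 3 x^2 + x
# The global minimum is near x = -1.30 (f = -3.51); a shallower
# local minimum sits near x = 1.13 (f = -1.07).
def f(x):
    return x[0] ** 4 - 3.0 * x[0] ** 2 + x[0]

# Draw random initial guesses over the region of interest.
rng = np.random.default_rng(0)
starts = rng.uniform(-3.0, 3.0, size=(20, 1))

# Run a gradient-based solver from each start and keep the best result.
solutions = [minimize(f, x0, method="SLSQP") for x0 in starts]
best = min(solutions, key=lambda r: r.fun)
print(best.x, best.fun)  # the deeper minimum, near x = -1.30
```

With starting points spread across both basins of attraction, the best of the local solutions is very likely the global one, though, as the text notes, no run count can prove that no deeper minimum exists elsewhere.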
