
Appendix A

Gradient-Based Optimization

The goal of a general unconstrained nonlinear programming problem is to minimize a convex objective function, f, over a set of design variables, x, defined in the set of real numbers, R^n:

\min_{x \in \mathbb{R}^n} f(x)   (A.1)

One way to find the optimal set of parameters, x^*, is to begin at some initial iterate, x_0, and search the design space by finding successive iterates, x_k, that reduce the objective function. A general form for the iterate is:

x_{k+1} = x_k + \alpha_k d_k   (A.2)

where k denotes the iteration number, \alpha_k is the stepsize, and d_k is a search direction.

It is the choice of the search direction that distinguishes one optimization algorithm from another. In gradient-based optimization, d is chosen based on the gradient of the cost function at each iteration. There is a large body of work on this subject and many algorithms from which to choose. The choice of algorithm depends on the features of the given problem formulation. Bertsekas provides a detailed overview of many popular gradient methods in [16]. In this appendix, three popular gradient-based algorithms are briefly reviewed.
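As an illustrative sketch (not taken from this thesis), the iterate in Eq. (A.2) can be implemented with the simplest gradient-based choice of search direction, d_k = -\nabla f(x_k) (steepest descent), and a fixed stepsize. The quadratic objective, stepsize, tolerance, and starting point below are assumptions chosen only for demonstration.

# Minimal steepest-descent sketch of the update x_{k+1} = x_k + alpha_k * d_k,
# with d_k = -grad f(x_k) and a fixed stepsize alpha. Illustrative only.
import numpy as np

def steepest_descent(grad_f, x0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Apply Eq. (A.2) repeatedly until the gradient is (nearly) zero."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        d = -grad_f(x)                 # gradient-based search direction d_k
        if np.linalg.norm(d) < tol:    # stationarity test: ||grad f|| ~ 0
            break
        x = x + alpha * d              # the iterate update of Eq. (A.2)
    return x

# Example: convex quadratic f(x) = x^T A x with symmetric positive-definite A,
# whose unique minimizer is x^* = 0.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
grad_f = lambda x: 2.0 * A @ x
x_star = steepest_descent(grad_f, x0=[1.0, -2.0])
print(x_star)  # approaches the minimizer x^* = 0

More sophisticated gradient methods differ only in how d_k and \alpha_k are chosen (for example, conjugate directions or curvature-informed steps), while the outer iteration above stays the same.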

