Chapter 5 Robust Performance Tailoring with Tuning - SSL - MIT

such that the gradient direction is almost orthogonal to the direction leading to the minimum.

A.1.1 Stepsize Selection

There are many possible choices for the stepsize, αk. Perhaps one of the simplest is the constant stepsize, in which a fixed stepsize, s > 0, is selected for all iterates:

αk = s, k = 0, 1, ... (A.7)

Although easy to implement, a constant stepsize can cause the algorithm to diverge if s is too large or result in very slow convergence if s is too small.
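Both failure modes of the constant stepsize rule can be seen on a simple quadratic cost. The sketch below is illustrative only; the cost f(x) = ½ xᵀAx, the matrix A, and the stepsize values are assumptions chosen so that one run converges and the other diverges (for this quadratic, divergence occurs once s exceeds 2 divided by the largest eigenvalue of A):

```python
import numpy as np

def gradient_descent_constant(grad, x0, s, iters=100):
    """Gradient descent with the constant stepsize rule of Eq. A.7:
    alpha_k = s for every iteration k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - s * grad(x)  # d_k = -grad f(x_k), alpha_k = s
    return x

# Illustrative cost f(x) = 0.5 * x^T A x with gradient A x; minimum at x = 0.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad = lambda x: A @ x

x_small = gradient_descent_constant(grad, [1.0, 1.0], s=0.1)  # s < 2/3: converges
x_large = gradient_descent_constant(grad, [1.0, 1.0], s=0.7)  # s > 2/3: diverges
```

Here the largest eigenvalue of A is 3, so the iteration contracts only for s < 2/3; with s = 0.7 the iterates grow without bound.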

Another relatively simple rule is the diminishing stepsize, in which αk converges to zero with each successive iteration:

αk → 0 (A.8)

This rule does not guarantee descent and it is possible that the stepsize may become so small that progress is effectively halted before an optimum is reached. To prevent this from happening the following condition is imposed:

∑_{k=0}^{∞} αk = ∞ (A.9)

αk = α0/√k (A.10)

Equation A.10 is a possible choice for a decreasing stepsize rule. Convergence with this rule is also known to be slow.
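A minimal sketch of the diminishing stepsize rule of Equation A.10, on the same assumed quadratic cost as before (the index is shifted to k + 1 so that the first stepsize is defined; the stepsizes satisfy both A.8 and the divergent-sum condition A.9):

```python
import numpy as np

def gradient_descent_diminishing(grad, x0, alpha0, iters=500):
    """Gradient descent with the diminishing stepsize rule of Eq. A.10:
    alpha_k = alpha0 / sqrt(k), implemented with k + 1 so the first
    iteration (k = 0) has a finite stepsize."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        alpha_k = alpha0 / np.sqrt(k + 1)  # alpha_k -> 0, sum alpha_k = inf
        x = x - alpha_k * grad(x)
    return x

# Illustrative cost f(x) = 0.5 * x^T A x with gradient A x; minimum at x = 0.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad = lambda x: A @ x
x_final = gradient_descent_diminishing(grad, [1.0, 1.0], alpha0=0.2)
```

Because the stepsizes shrink like 1/√k, the iterates approach the minimum but noticeably more slowly than a well-chosen constant stepsize would, consistent with the slow convergence noted above.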

A more complicated but better-behaved method is the limited minimization rule:

f(xk + αk dk) = min_{α ∈ [0, s]} f(xk + α dk) (A.11)

In this method, αk is chosen such that the cost function is minimized along the direction dk. A fixed positive scalar, s, may be chosen to limit the possible size of
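The limited minimization rule of Equation A.11 can also be sketched on an assumed quadratic cost. For f(x) = ½ xᵀAx the one-dimensional minimization along dk = -∇f(xk) has the closed form α* = (gᵀg)/(gᵀAg), which is then clipped to the interval [0, s]; for a general cost this inner minimization would instead require a numerical line search:

```python
import numpy as np

def steepest_descent_limited_min(A, x0, s=1.0, iters=50):
    """Steepest descent on f(x) = 0.5 * x^T A x using the limited
    minimization rule of Eq. A.11: alpha_k minimizes f(x_k + alpha d_k)
    over alpha in [0, s]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = A @ x               # gradient of f at x_k
        if g @ g < 1e-16:       # already at the minimum
            break
        d = -g                  # steepest-descent direction d_k
        # Exact minimizer along d for this quadratic, limited to [0, s].
        alpha = min((g @ g) / (g @ (A @ g)), s)
        x = x + alpha * d
    return x

A = np.array([[3.0, 0.0], [0.0, 1.0]])
x_final = steepest_descent_limited_min(A, [1.0, 1.0])
```

Each iteration is more expensive than with the simpler rules, but every step is guaranteed to decrease the cost, and convergence is much faster on this example.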
