The Steepest Descent Algorithm for Unconstrained Optimization and ...

where
$$Q = \begin{pmatrix} +4 & -2 \\ -2 & +2 \end{pmatrix} \quad \text{and} \quad q = \begin{pmatrix} +2 \\ -2 \end{pmatrix}.$$

Then
$$\nabla f(x) = \begin{pmatrix} +4 & -2 \\ -2 & +2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} +2 \\ -2 \end{pmatrix},$$

and so
$$x^* = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \quad \text{and} \quad f(x^*) = -1.$$

Direct computation shows that the eigenvalues of $Q$ are $A = 3 + \sqrt{5}$ and $a = 3 - \sqrt{5}$, whereby the bound on the convergence constant is
$$\delta = \left( \frac{A/a - 1}{A/a + 1} \right)^2 \le 0.556.$$
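The eigenvalues $A$ and $a$ and the resulting bound $\delta$ can be checked numerically; the following is a small sketch (the use of NumPy is an assumption, since the notes give no code):

```python
import numpy as np

# Q from the example: f(x) = 1/2 x^T Q x + q^T x
Q = np.array([[4.0, -2.0],
              [-2.0, 2.0]])

# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order,
# so a = 3 - sqrt(5) comes first and A = 3 + sqrt(5) last.
a, A = np.linalg.eigvalsh(Q)

kappa = A / a                            # condition number A/a
delta = ((kappa - 1) / (kappa + 1)) ** 2 # bound on the convergence constant

print(A, a)    # approximately 5.2361 and 0.7639
print(delta)   # approximately 0.5556, i.e. <= 0.556
```

In fact the bound works out exactly to $\delta = 5/9 \approx 0.5556$ for this matrix, which is where the figure $0.556$ comes from.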

Suppose that $x_0 = (0, 0)$. Then we have
$$x_1 = (-0.4,\ 0.4), \qquad x_2 = (0,\ 0.8),$$
and the even-numbered iterates satisfy
$$x_{2n} = \left(0,\ 1 - 0.2^n\right) \quad \text{and} \quad f(x_{2n}) = (1 - 0.2^n)^2 - 2 + 2(0.2)^n,$$
and so
$$f(x_{2n}) - f(x^*) = (0.2)^{2n}.$$

Therefore, starting from the point $x_0 = (0, 0)$, the optimality gap goes down by a factor of $0.04$ after every two iterations of the algorithm.
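This behavior is easy to reproduce. The sketch below runs steepest descent with an exact line search on the example quadratic (for a quadratic, the minimizing step size along $-g$ is $\alpha = g^\top g / g^\top Q g$, which is a standard closed form, not something stated in the notes themselves):

```python
import numpy as np

# The example quadratic: f(x) = 1/2 x^T Q x + q^T x
Q = np.array([[4.0, -2.0], [-2.0, 2.0]])
q = np.array([2.0, -2.0])

def f(x):
    return 0.5 * x @ Q @ x + q @ x

x = np.array([0.0, 0.0])          # x_0 = (0, 0)
iterates = [x.copy()]
for k in range(6):
    g = Q @ x + q                 # gradient of f at x
    alpha = (g @ g) / (g @ Q @ g) # exact line-search step for a quadratic
    x = x - alpha * g             # steepest descent update
    iterates.append(x.copy())

# Optimality gaps f(x_k) - f(x*), with f(x*) = -1:
gaps = [f(z) - (-1.0) for z in iterates]

print(iterates[1], iterates[2])             # (-0.4, 0.4) and (0, 0.8)
print(gaps[2] / gaps[0], gaps[4] / gaps[2]) # both 0.04
```

The printed ratios confirm that the gap shrinks by exactly $0.04 = (0.2)^2$ over each pair of iterations, matching the formula $f(x_{2n}) - f(x^*) = (0.2)^{2n}$.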

5 Final Remarks

• The proof of convergence is basic in that it only relies on fundamental properties of the continuity of the gradient and on the closedness and boundedness of the level sets of the function $f(x)$.

• The analysis of the rate of convergence is quite elegant.

