Introduction to Unconstrained Optimization - Scilab
[Figure: contour plot with level sets 0.3, 2, 5, 10, 20 over the square [-2, 2] x [-2, 2].]
Figure 25: Contour of a quadratic function – one eigenvalue is zero, the other is positive.
function f = quadraticindef ( x1 , x2 )
    x = [x1 x2].'
    H = [4 2; 2 1]
    f = x.' * H * x;
endfunction
x = linspace ( -2 , 2 , 100 );
y = linspace ( -2 , 2 , 100 );
contour ( x , y , quadraticindef , [0.3 2 5 10 20] )
The contour is typical of a weak local optimum. Notice that the function remains constant along the eigenvector associated with the zero eigenvalue.
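This behavior can be cross-checked numerically. The following sketch uses Python/NumPy rather than Scilab, purely for illustration: it verifies that the Hessian H = [4 2; 2 1] from the script above is singular, and that f(x) = x'Hx is constant (zero) along the eigenvector of the zero eigenvalue.

```python
import numpy as np

# Hessian from the Scilab script above: singular, eigenvalues 0 and 5.
H = np.array([[4.0, 2.0], [2.0, 1.0]])

def f(x):
    # the quadratic f(x) = x^T H x
    return x @ H @ x

# eigh returns eigenvalues in ascending order for a symmetric matrix
eigvals, eigvecs = np.linalg.eigh(H)
v0 = eigvecs[:, 0]  # eigenvector associated with the zero eigenvalue

print(np.round(eigvals, 12))  # [0. 5.]

# f stays at f(0) = 0 along the direction v0: a weak (non-strict) optimum.
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(f(t * v0)) < 1e-10
```

Along any other direction the function grows, which is why the contour lines in figure 25 are open "troughs" rather than closed ellipses.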
Example 2.5 (Example where there is no stationary point.) Assume that b = (1, 0)^T and the Hessian matrix is

    H = ( 0 0 )
        ( 0 1 )    (60)
The function can be simplified as f(x) = x1 + (1/2) x2^2. The gradient of the function is g(x) = (1, x2)^T. Since its first component is 1 everywhere, there is no stationary point, which implies that the function is unbounded below (f decreases without bound as x1 goes to minus infinity). The following script produces the contours of the corresponding quadratic function, which are presented in figure 26.
function f = quadraticincomp ( x1 , x2 )
    x = [x1 x2].'
    H = [0 0; 0 1]
    b = [1; 0]
    f = x.' * b + 0.5 * x.' * H * x;
endfunction
x = linspace ( -10 , 10 , 100 );
y = linspace ( -10 , 10 , 100 );
contour ( x , y , quadraticincomp , [ -10 -5 0 5 10 20] )
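The claims of the example can also be verified numerically. Below is an illustrative cross-check in Python/NumPy (not the document's Scilab): it confirms that the gradient g(x) = b + Hx never vanishes, and that f decreases without bound along the x1 axis.

```python
import numpy as np

# Same data as the Scilab script above.
H = np.array([[0.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.0])

def f(x):
    # f(x) = x1 + (1/2) x2^2
    return b @ x + 0.5 * (x @ H @ x)

def g(x):
    # gradient g(x) = b + H x = (1, x2)^T
    return b + H @ x

# The first component of the gradient is 1 at every point,
# so g(x) is never zero: no stationary point exists.
x = np.array([3.0, -2.0])
print(g(x))  # [ 1. -2.]

# f is unbounded below: along x = (t, 0), f = t -> -inf as t -> -inf.
print(f(np.array([-1e6, 0.0])))  # -1000000.0
```

This matches the shape of the contours in figure 26: the level sets are parabolas that drift toward minus infinity in the x1 direction instead of closing around a minimizer.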