
Proof. We assume that the Hessian matrix is positive definite. Let us prove that
the function f is convex. Since f is twice continuously differentiable, we can use the
following Taylor expansion of f. By proposition 1.7, there exists a θ satisfying
0 ≤ θ ≤ 1 so that

f(y) = f(x) + g(x)^T (y − x) + (1/2) (y − x)^T H(x + θ(y − x)) (y − x),    (51)

for any x, y ∈ C. Since H is positive definite, the scalar (y − x)^T H(x + θ(y − x)) (y − x)
is nonnegative, which leads to the inequality

f(y) ≥ f(x) + g(x)^T (y − x).    (52)

By proposition 2.5, this proves that f is convex on C.

Conversely, assume that f is convex on the convex set C and let us prove that the
Hessian matrix is positive definite. We assume that the Hessian matrix is not positive
definite and show that this leads to a contradiction. The hypothesis that the Hessian
matrix is not positive definite implies that there exist a point x ∈ C and a vector p
such that p^T H(x) p < 0. Let us define y = x + p. By proposition 1.7, there exists
a θ satisfying 0 ≤ θ ≤ 1 so that equation (51) holds. Since C is open and f is
twice continuously differentiable, the inequality p^T H(x) p < 0 also holds in a
neighbourhood of x. More formally, this implies that we can choose the vector p small
enough so that p^T H(x + θp) p < 0 for any θ ∈ [0, 1]. Therefore, equation (51) implies

f(y) < f(x) + g(x)^T (y − x).    (53)

By proposition 2.5, the previous inequality contradicts the hypothesis that f is
convex. Therefore, the Hessian matrix H is positive definite, which concludes the
proof.
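This second-order test is easy to check numerically. As an illustration, the following
Scilab sketch uses a hypothetical example function, f(x) = exp(x1) + x2^2 (not taken from
the text), whose Hessian H(x) = [exp(x1) 0; 0 2] is known analytically, and verifies that
its smallest eigenvalue is positive at a few random points.

function H = hessf(x)
    // Hessian of the hypothetical example f(x) = exp(x(1)) + x(2)^2
    H = [exp(x(1)) 0; 0 2];
endfunction

for k = 1 : 5
    x = rand(2, 1, "normal");       // random sample point in R^2
    lmin = min(spec(hessf(x)));     // smallest eigenvalue of H(x)
    mprintf("x = (%f, %f): smallest eigenvalue of H(x) = %f\n", x(1), x(2), lmin);
end

Since the smallest eigenvalue is positive at every point, the Hessian is positive definite
there, which is exactly the condition used in the first part of the proof.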

2.5 Examples of convex functions<br />

In this section, we give several examples of univariate and multivariate convex functions.
We also give an example of a nonconvex function which defines a convex set.

Example 2.1 Consider the quadratic function f : R^n → R given by

f(x) = f_0 + g^T x + (1/2) x^T A x,    (54)

where f_0 ∈ R, g ∈ R^n and A ∈ R^{n×n}. For any x ∈ R^n, the Hessian matrix H(x) is
equal to the matrix A. Therefore, the quadratic function f is convex if and only if
the matrix A is positive definite. The quadratic function f is strictly convex if and
only if the matrix A is strictly positive definite. We will review quadratic functions
in more depth in the next section.
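For a quadratic, the Hessian is the constant matrix A, so the convexity test reduces to
an eigenvalue check on A. The following Scilab sketch performs this check with the spec
function; the particular values of A, g and f_0 are chosen here only for illustration and
do not come from the text.

A  = [2 0.5; 0.5 1];   // symmetric matrix of the quadratic term
g  = [1; -1];          // linear term (does not affect convexity)
f0 = 3;                // constant term (does not affect convexity)

// The Hessian of f(x) = f0 + g'*x + 0.5*x'*A*x is the constant matrix A,
// so the test of the previous proof reduces to the eigenvalues of A.
if min(spec(A)) > 0 then
    mprintf("A is positive definite: the quadratic f is convex.\n");
else
    mprintf("A is not positive definite.\n");
end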

There are many other examples of convex functions. Obviously, any linear function
is convex. The function exp(ax) is convex on R for any a ∈ R. The function x^a
is convex on R_{++} = {x > 0} for any a ≥ 1. The function −log(x) is convex on R_{++}.
The function x log(x) is convex on R_{++}. The following script produces the two plots
presented in figure 19.
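A possible sketch of such a script, assuming (as a guess, not the original script) that
the two panels of figure 19 show −log(x) and x log(x) on R_{++}:

// Assumed choice of functions for the two panels: -log(x) and x*log(x).
x = linspace(0.01, 3, 200);
scf();
subplot(1, 2, 1);
plot(x, -log(x));
xtitle("-log(x) on R_++");
subplot(1, 2, 2);
plot(x, x .* log(x));
xtitle("x*log(x) on R_++");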
