Introduction to Unconstrained Optimization - Scilab

1 Overview

In this section, we analyze optimization problems and define the associated vocabulary. We introduce level sets and distinguish between local and global optima. We emphasize the use of contour plots in the context of unconstrained and constrained optimization.
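As an illustration, the following minimal Scilab sketch draws the contour plot of the hypothetical quadratic function f(x1, x2) = x1^2 + 2*x2^2; each contour curve is a level set of f.

// Sketch: contour plot of a hypothetical quadratic function.
// Each contour curve is a level set {x | f(x) = c}.
function f = quadcost(x1, x2)
    f = x1.^2 + 2*x2.^2
endfunction

x = linspace(-2, 2, 50);
y = linspace(-2, 2, 50);
z = feval(x, y, quadcost);   // evaluate f on the 50-by-50 grid
contour2d(x, y, z, 10)       // draw 10 level curves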

1.1 Classification of optimization problems

In this document, we consider optimization problems in which we try to minimize a cost function

\min_{x \in \mathbb{R}^n} f(x) \qquad (1)

with or without constraints. Several properties of the problem to solve may be taken into account by the numerical algorithms:

• The unknown may be a vector of real or integer values.

• The number of unknowns may be small (from 1 to 10 - 100), medium (from 10 to 100 - 1 000) or large (from 1 000 - 10 000 and above), leading to dense or sparse linear systems.

• There may be one or several cost functions (multi-objective optimization).

• The cost function may be smooth or non-smooth.

• There may be constraints or no constraints.

• The constraints may be bound constraints, linear constraints or nonlinear constraints.

• The cost function can be linear, quadratic or a general nonlinear function.
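As a concrete instance of the simplest class of problem (1) (smooth, unconstrained, real unknowns, a single objective), the following minimal Scilab sketch minimizes a hypothetical quadratic cost function with the optim function. The cost function, its gradient and the starting point are illustrative only.

// Minimal sketch: minimize f(x) = (x1-1)^2 + (x2-2)^2, a smooth,
// unconstrained problem with n = 2 real unknowns and one objective.
function [f, g, ind] = cost(x, ind)
    f = (x(1) - 1)^2 + (x(2) - 2)^2
    g = [2*(x(1) - 1); 2*(x(2) - 2)]   // analytical gradient
endfunction

x0 = [0; 0];                    // illustrative initial guess
[fopt, xopt] = optim(cost, x0)
// Expected: xopt close to [1; 2] and fopt close to 0.

By default, optim uses a quasi-Newton method; the cost function must return both the function value and its gradient.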

An overview of optimization problems is presented in figure 1. In this document, we will be concerned mainly with continuous parameters and problems with one objective function only. Even within this class, smooth and non-smooth problems require very different numerical methods. It is generally believed that equality-constrained optimization problems are easier to solve than inequality-constrained problems. This is because, with inequality constraints, computing the set of constraints which are active at the optimum is a difficult problem.

The size of the problem, i.e. the number n of parameters, is also of great concern with respect to the design of optimization algorithms. Obviously, the rate of convergence is of primary interest when the number of parameters is large. Algorithms which require on the order of n iterations to converge, like BFGS methods for example, will not be efficient if we can only afford a number of iterations much smaller than n. Moreover, several algorithms (like Newton's method, for example) require too much storage to be of practical value when n is so large that n × n matrices cannot be stored. Fortunately, good algorithms, such as conjugate gradient methods and limited-memory BFGS methods, are specifically designed for this situation.
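For instance, the following sketch (with a hypothetical separable cost function) uses the "gc" option of optim, a limited-memory algorithm which avoids storing any n × n matrix:

// Sketch: a large-scale problem with n = 10000 unknowns.
// The "gc" option selects a limited-memory algorithm, so that
// no n-by-n matrix is ever stored.
n = 10000;

function [f, g, ind] = bigcost(x, ind)
    f = sum((x - 1).^2)     // separable cost, minimum at x = ones(n, 1)
    g = 2*(x - 1)
endfunction

x0 = zeros(n, 1);
[fopt, xopt] = optim(bigcost, x0, "gc")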

