GPS-X Technical Reference

Optimizer

Note that you can set the heteroscedasticity factors yourself or have GPS-X estimate their optimal values for you.

As process models are generally nonlinear, the parameter values that maximize Equation 14.12 cannot be determined analytically; an iterative optimization method is required. As mentioned earlier in this chapter, GPS-X uses the Nelder and Mead simplex method (Press et al., 1986) for optimization. This method is a direct search algorithm that does not require the calculation of partial derivatives, which is helpful when estimating parameters in systems of differential equations. The simplex method also has the advantage that it can handle objective functions containing discontinuities. It is generally slower than derivative-based optimization methods, but it can handle a greater variety of objective functions and is often found to be quite robust in finding solutions. The version of the simplex method implemented in GPS-X allows bounds to be placed on the parameters.

Because the simplex method is designed for minimization, GPS-X minimizes the negative of Equation 14.12 to determine the optimal parameter estimates.
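As an illustration, a bounded Nelder-Mead search of this kind can be sketched with SciPy. The exponential model and the concentrated negative log-likelihood below are placeholders chosen for the example; they are not GPS-X's process models or its Equation 14.12.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in for a nonlinear process model; GPS-X would
# instead integrate a system of differential equations.
def model(params, t):
    k, y0 = params
    return y0 * np.exp(-k * t)

# Hypothetical observations of a single target variable.
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
y_obs = np.array([10.0, 6.1, 3.6, 1.4, 0.2])

# Placeholder negative log-likelihood (not Equation 14.12), assuming
# independent normal errors with constant variance; the variance is
# "concentrated out" by replacing it with its ML estimate.
def neg_log_likelihood(params):
    resid = y_obs - model(params, t_obs)
    n = len(resid)
    sigma2 = np.sum(resid**2) / n
    return 0.5 * n * np.log(sigma2)

# Nelder-Mead needs no derivatives, and SciPy (>= 1.7) accepts
# parameter bounds, mirroring the bounded simplex used by GPS-X.
result = minimize(neg_log_likelihood, x0=[0.1, 5.0],
                  method="Nelder-Mead",
                  bounds=[(1e-6, 10.0), (1e-6, 100.0)])
print(result.x)  # fitted (k, y0)
```

Minimizing the negative of the likelihood-based objective, as above, is equivalent to maximizing the objective itself.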

Sum of Squares Objective Function

When there is only one target variable and the variance is constant across all observations (i.e., the heteroscedasticity parameter is zero), the sum of squares objective function is equivalent to the maximum likelihood objective function. For problems with more than one target variable, the sum of squares objective function is a special case of the maximum likelihood objective function that results if we make additional assumptions about the measurement errors. Our maximum likelihood objective function, given by Equation 14.12, is derived using the following assumptions:

- The measurement errors are normally distributed random variables with a mean of zero.
- For each target variable, the measurement errors are independent from observation to observation. Each target variable has its own variance, which varies from observation to observation according to a power law. The variances are unknown and are calculated as part of the optimization process.
- There is no correlation between different target variables.
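The assumptions above can be sketched as a heteroscedastic negative log-likelihood. This is an illustrative form only, since the exact expression of Equation 14.12 is not reproduced here: for observation i of target variable j, the variance is modeled as a power law of the predicted value, and the per-variable scale is concentrated out by its maximum likelihood estimate.

```python
import numpy as np

def neg_log_likelihood(y_obs, y_pred, gamma):
    """Illustrative heteroscedastic objective (not Equation 14.12).

    y_obs, y_pred: dicts mapping target-variable name -> array of values
                   (predictions must be positive for the power law).
    gamma: dict of per-variable heteroscedasticity factors.
    """
    total = 0.0
    for name in y_obs:
        resid = y_obs[name] - y_pred[name]
        # power-law variance model: var_ij = sigma_j**2 * y_pred_ij**(2*gamma_j)
        weights = y_pred[name] ** (2.0 * gamma[name])
        n = len(resid)
        # sigma_j**2 concentrated out by its ML estimate
        sigma2 = np.sum(resid**2 / weights) / n
        total += (0.5 * n * np.log(sigma2)
                  + gamma[name] * np.sum(np.log(y_pred[name])))
        # no cross terms: errors of different target variables are uncorrelated
    return total
```

With all heteroscedasticity factors set to zero, the weights become one and each variable's contribution depends only on its mean squared residual.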

If we make the additional assumptions that all of the variances are equal (i.e., all responses have the same variance) and that the variances do not change from observation to observation (i.e., the heteroscedasticity factors are all zero), then the maximum likelihood objective function reduces to the sum of squares objective function in the multi-response case. The assumptions used to derive the sum of squares objective function in the multi-response case do not hold in most practical situations. Therefore, it is recommended that the maximum likelihood objective function be used for calibration problems with more than one target variable.
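The single-variable equivalence can be checked numerically with a toy one-parameter model (hypothetical, not from GPS-X): under constant variance, the concentrated negative log-likelihood is a monotone function of the sum of squares, so both objectives yield the same parameter estimate.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy linear model y = a * x with made-up observations.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

def sse(a):
    return np.sum((y - a * x) ** 2)

# Concentrated negative log-likelihood for one target variable with
# constant variance (heteroscedasticity factor = 0): a monotone
# transform of the SSE, so it has the same minimizer.
def neg_log_lik(a):
    n = len(y)
    return 0.5 * n * np.log(sse(a) / n)

a_sse = minimize_scalar(sse, bounds=(0.0, 10.0), method="bounded").x
a_mle = minimize_scalar(neg_log_lik, bounds=(0.0, 10.0), method="bounded").x
print(a_sse, a_mle)  # the two estimates agree to optimizer tolerance
```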

