Large-Scale Examples

    F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));

Step 2: Call the nonlinear least-squares routine.

    x0 = [0.3 0.4]                       % Starting guess
    [x,resnorm] = lsqnonlin(@myfun,x0)   % Invoke optimizer

Because the Jacobian is not computed in myfun.m, and no Jacobian sparsity pattern is provided by the JacobPattern option in options, lsqnonlin calls the large-scale method with JacobPattern set to Jstr = sparse(ones(10,2)). This is the default for lsqnonlin. Note that the Jacobian option in options is set to 'off' by default.

When the finite-differencing routine is called the first time, it detects that Jstr is actually a dense matrix, i.e., that no speed benefit is derived from storing it as a sparse matrix. From then on, the finite-differencing routine uses Jstr = ones(10,2) (a full matrix) for the optimization computations.

After about 24 function evaluations, this example gives the solution

    x =
        0.2578    0.2578

    resnorm    % Residual or sum of squares
    resnorm =
        124.3622

Most computer systems can handle much larger full problems, say into the hundreds of equations and variables. But if there is some sparsity structure in the Jacobian (or Hessian) that can be taken advantage of, the large-scale methods will always run faster if this information is provided.

Nonlinear Minimization with Gradient and Hessian

This example involves solving a nonlinear minimization problem with a tridiagonal Hessian matrix H(x), first computed explicitly, and then by providing the Hessian's sparsity structure for the finite-differencing routine.
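As a preview of the second approach, here is a minimal sketch of how a tridiagonal sparsity pattern can be handed to the finite-differencing routine. The objective chainfun, the problem size n, and the starting point are illustrative assumptions, not the worked example from this guide. First, an M-file chainfun.m that returns the objective and its gradient:

    function [f,g] = chainfun(x)
    % Hypothetical chained objective: each term couples only x(k) and
    % x(k+1), so the Hessian is tridiagonal.
    n = length(x);
    t = x(2:n) - x(1:n-1).^2;                % coupling terms
    f = sum(100*t.^2 + (1 - x(1:n-1)).^2);
    g = zeros(n,1);
    g(1:n-1) = -400*x(1:n-1).*t - 2*(1 - x(1:n-1));
    g(2:n)   = g(2:n) + 200*t;

Then supply the gradient and the Hessian's sparsity pattern through options:

    n = 100;
    Hstr = spdiags(ones(n,3), -1:1, n, n);     % ones on the three diagonals
    options = optimset('GradObj','on','HessPattern',Hstr);
    [x,fval] = fminunc(@chainfun, zeros(n,1), options);

Because a tridiagonal pattern can be covered by three column groups, the finite-differencing routine should need only about three extra gradient evaluations per Hessian approximation, independent of n.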

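The same mechanism applies on the least-squares side. The sketch below assumes a hypothetical banded residual function bandfun (unrelated to myfun above) and shows how a Jacobian sparsity pattern would be passed to lsqnonlin through the JacobPattern option:

    function F = bandfun(x)
    % Hypothetical residual: equation k involves only x(k) and x(k+1),
    % so the Jacobian is bidiagonal.
    n = length(x);
    F = 2*x - 1;
    F(1:n-1) = F(1:n-1) - x(2:n);

    n = 1000;
    Jstr = spdiags(ones(n,2), 0:1, n, n);      % pattern of possible nonzeros
    options = optimset('JacobPattern', Jstr);
    [x,resnorm] = lsqnonlin(@bandfun, zeros(n,1), [], [], options);

With this two-diagonal pattern, each finite-difference Jacobian should cost only about two extra function evaluations, no matter how large n is.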