What Is Optimization Toolbox?

2 Tutorial

To restrict x in Equation 2-2 to be greater than 0 (i.e., x(1) >= 0 and x(2) >= 0), use the commands

    x0 = [-1,1];   % Make a starting guess at the solution
    lb = [0,0];    % Set lower bounds
    ub = [ ];      % No upper bounds
    options = optimset('LargeScale','off');
    [x,fval] = ...
        fmincon(@objfun,x0,[],[],[],[],lb,ub,@confun,options)
    [c, ceq] = confun(x)

Note that to pass in the lower bounds as the seventh argument to fmincon, you must specify values for the third through sixth arguments. In this example, we specified [] for these arguments since there are no linear inequalities or linear equalities.

After 13 function evaluations, the solution produced is

    x =
         0    1.5000
    fval =
        8.5000
    c =
         0
       -10
    ceq =
        []

When lb or ub contains fewer elements than x, only the first corresponding elements in x are bounded. Alternatively, if only some of the variables are bounded, use -inf in lb for variables that are unbounded below and inf in ub for variables that are unbounded above. For example,

    lb = [-inf 0];
    ub = [10 inf];

bounds x(1) <= 10 and x(2) >= 0; x(1) has no lower bound, and x(2) has no upper bound. Using inf and -inf gives better numerical results than using a very large positive or negative number to imply the lack of a bound.
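For readers without MATLAB, the bounded problem above can be sketched with SciPy's scipy.optimize.minimize. This translation is not part of the Optimization Toolbox documentation; the choice of the SLSQP solver and the starting point [1, 1] are our own, and SciPy states inequality constraints as g(x) >= 0 rather than c(x) <= 0.

```python
# Minimize f(x) = exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
# subject to the two nonlinear inequalities of Equation 2-2 and x >= 0.
# Each MATLAB constraint c(x) <= 0 is rewritten as -c(x) >= 0 for SciPy.
import numpy as np
from scipy.optimize import minimize

def objfun(x):
    return np.exp(x[0]) * (4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1)

constraints = [
    {"type": "ineq", "fun": lambda x: -(1.5 + x[0]*x[1] - x[0] - x[1])},
    {"type": "ineq", "fun": lambda x: x[0]*x[1] + 10},
]
bounds = [(0, None), (0, None)]   # lower bounds of 0, no upper bounds

res = minimize(objfun, x0=[1, 1], method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x, res.fun)   # should land near the documented x = [0, 1.5], f = 8.5
```

The printed solution should match the fmincon result above to solver tolerance.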

Examples That Use Standard Algorithms

Note that the number of function evaluations needed to find the solution is reduced because we further restricted the search space. Fewer function evaluations are usually taken when a problem has more constraints and bound limitations, because the optimization makes better decisions regarding step size and regions of feasibility than in the unconstrained case. It is, therefore, good practice to bound and constrain problems, where possible, to promote fast convergence to a solution.

Constrained Example with Gradients

Ordinarily, the medium-scale minimization routines use numerical gradients calculated by finite-difference approximation. This procedure systematically perturbs each of the variables in order to calculate function and constraint partial derivatives. Alternatively, you can provide a function to compute partial derivatives analytically. Typically, the problem is solved more accurately and efficiently if such a function is provided.

To solve Equation 2-2 using analytically determined gradients, do the following.

Step 1: Write an M-file for the objective function and gradient.

    function [f,G] = objfungrad(x)
    f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
    % Gradient of the objective function
    t = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
    G = [ t + exp(x(1))*(8*x(1) + 4*x(2)),
          exp(x(1))*(4*x(1) + 4*x(2) + 2)];

Step 2: Write an M-file for the nonlinear constraints and the gradients of the nonlinear constraints.

    function [c,ceq,DC,DCeq] = confungrad(x)
    c(1) = 1.5 + x(1)*x(2) - x(1) - x(2);  % Inequality constraints
    c(2) = -x(1)*x(2) - 10;
    % Gradient of the constraints (one column per constraint)
    DC = [x(2)-1, -x(2);
          x(1)-1, -x(1)];
    % No nonlinear equality constraints
    ceq = [];
    DCeq = [];
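The same idea, supplying analytic derivatives instead of relying on finite differences, can be sketched in SciPy by passing jac callables. Again, this is our own illustrative translation, not toolbox material: the gradient formulas mirror objfungrad and confungrad above, and because SciPy expects g(x) >= 0, each constraint Jacobian is the negated column of DC.

```python
# Equation 2-2 with analytic gradients: dF/dx1 = f + exp(x1)*(8*x1 + 4*x2),
# dF/dx2 = exp(x1)*(4*x1 + 4*x2 + 2), matching the M-file's G.
import numpy as np
from scipy.optimize import minimize

def objfun(x):
    return np.exp(x[0]) * (4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1)

def objgrad(x):
    f = objfun(x)   # reuse f, as the M-file reuses t
    return np.array([f + np.exp(x[0]) * (8*x[0] + 4*x[1]),
                     np.exp(x[0]) * (4*x[0] + 4*x[1] + 2)])

# MATLAB's c(x) <= 0 becomes g(x) = -c(x) >= 0, so each "jac" below is
# the negative of the corresponding column of DC.
constraints = [
    {"type": "ineq",
     "fun": lambda x: -(1.5 + x[0]*x[1] - x[0] - x[1]),
     "jac": lambda x: np.array([1 - x[1], 1 - x[0]])},
    {"type": "ineq",
     "fun": lambda x: x[0]*x[1] + 10,
     "jac": lambda x: np.array([x[1], x[0]])},
]

res = minimize(objfun, x0=[-1, 1], jac=objgrad, method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)
```

With no bounds, both constraints are active at the optimum, so the printed solution should approach the point where x(1)*x(2) = -10 and x(1) + x(2) = -8.5.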

