Chapter 5 Robust Performance Tailoring with Tuning - SSL - MIT


…formance. This worst-case performance is then the objective function for the outer tailoring optimization. In effect, anti-optimization is analogous to performance tailoring for worst-case, instead of nominal, performance.

In order to use a gradient-based optimization algorithm to solve the anti-optimization problem efficiently, analytical gradients of the objective are required. These gradients are difficult to obtain given the form of Equation 3.4. As the tailoring parameters change, the worst-case uncertainty vector may move from one vertex to another, causing a discontinuity in the gradient. If the objective and constraints are linear, then the problem can be solved with a linear programming algorithm and the discontinuities do not cause a problem. However, if a quadratic approximation algorithm, such as SQP, is applied to a problem with nonlinear objectives and/or constraints, the discontinuity causes the optimization to misbehave and search inefficiently.

The problem can be formulated in a manner that is better suited for SQP by minimizing a dummy variable, $z$, and moving the performance at the uncertainty vertices to the constraints:

$$
\begin{aligned}
\min_{\vec{x},\,z} \quad & z \\
\text{s.t.} \quad & \vec{g}(\vec{x}) \leq 0 \\
& h_i(z,\vec{x},\vec{p}_i) \leq 0 \quad \forall\; i = 1 \ldots n_{pv}
\end{aligned}
\qquad (3.5)
$$

where the augmented constraints, $h_i(z,\vec{x},\vec{p}_i)$, are defined as follows:

$$
h_i(z,\vec{x},\vec{p}_i) = -z + f(\vec{x},\vec{p}_i) \qquad (3.6)
$$

By inspection, the gradients of the objective with respect to the tailoring variables, $\vec{x}$, and the dummy variable, $z$, are zero and one, respectively. The performance gradients are included through the augmented constraint gradients, instead of in the objective function:

$$
\frac{\partial h_i(z,\vec{x},\vec{p}_i)}{\partial \vec{x}} = \frac{\partial f(\vec{x},\vec{p}_i)}{\partial \vec{x}} \qquad (3.7)
$$

$$
\frac{\partial h_i(z,\vec{x},\vec{p}_i)}{\partial z} = -1 \qquad (3.8)
$$

In this alternate formulation (Equation 3.5) the optimization is a minimization with nonlinear constraints. Although the performance at each of the vertices is still required at each iteration, it is no longer necessary to determine the worst-case vertex. The problem is set up such that the optimal cost must lie on one of the constraint boundaries, and as a result the variable $z$ is the worst-case performance.
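To make the reformulation concrete, the following is a minimal sketch of Equations 3.5-3.8 set up for a standard SQP-type solver (SciPy's SLSQP). The performance function f(x, p), the design constraint function g(x), the box uncertainty bounds, and the starting point x0 are illustrative placeholders rather than quantities defined in this chapter; the uncertainty vertices are simply enumerated from the box bounds.

```python
import itertools
import numpy as np
from scipy.optimize import minimize


def solve_worst_case(f, g, x0, p_lower, p_upper):
    """Epigraph form of Equations 3.5-3.6: minimize z subject to
    g(x) <= 0 and h_i = -z + f(x, p_i) <= 0 at every uncertainty vertex."""
    # Vertices of the box-shaped uncertainty set: all combinations of bounds.
    vertices = [np.array(v) for v in itertools.product(*zip(p_lower, p_upper))]

    # Decision vector is [x, z]; the objective depends only on the dummy z,
    # so its gradient is zero in x and one in z, as noted in the text.
    def objective(xz):
        return xz[-1]

    # SciPy expects inequality constraints as c(xz) >= 0, so h_i <= 0
    # becomes z - f(x, p_i) >= 0. If an analytic df/dx is available, the
    # constraint Jacobian of Equations 3.7-3.8 could be passed via 'jac';
    # here SLSQP falls back on finite differences.
    cons = [{'type': 'ineq',
             'fun': (lambda xz, p=p: xz[-1] - f(xz[:-1], p))}
            for p in vertices]
    cons.append({'type': 'ineq',
                 'fun': lambda xz: -np.atleast_1d(g(xz[:-1]))})

    z0 = max(f(np.asarray(x0), p) for p in vertices)  # start z at the worst vertex
    res = minimize(objective, np.append(x0, z0), method='SLSQP',
                   constraints=cons)
    return res.x[:-1], res.x[-1]  # tailored design and worst-case performance
```

Because the objective is simply z, the solver sees smooth constraints at every vertex and never has to identify which vertex is currently the worst case, which is exactly the property motivating the reformulation.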

This robust design method is particularly well suited to convex parametric uncertainty models. In their monograph [15], Ben-Haim and Elishakoff define convex models and discuss their application to problems in applied mechanics. The authors show that for most practical problems the uncertainty space is convex and that therefore only the vertices of the space need be considered in robust design applications. This result is fortuitous, as it allows a large reduction in the uncertainty set and guarantees robustness to all other uncertainty values within the bounds.

3.2.2 Multiple Model

Multiple model is a robust design technique borrowed from the field of robust control. It is applied to control system design in order to obtain a controller that is stable for a range of parameter values [10, 47]. In order to achieve this goal, the weighted average of the H2 norms of a discrete set of plants is minimized. The resulting solution is guaranteed to stabilize each of the plants in the set.

The multiple model principle is readily applied to the robust performance tailoring problem since the output RMS value calculated with the Lyapunov expression is also an H2 norm. Instead of minimizing the nominal performance, as in the PT case, a weighted sum of the performances of a set of models within the uncertainty space is minimized.
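As a companion sketch, the weighted-sum objective can be assembled directly from the Lyapunov-based H2 computation mentioned above. The helper build_plant(x, p), which returns the state-space matrices of the tailored model at a given uncertainty vector, and the choice of weights are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov


def h2_norm(A, B, C):
    """H2 norm of a stable, strictly proper LTI plant (A, B, C).

    Solves the Lyapunov equation A Wc + Wc A^T + B B^T = 0 for the
    controllability Gramian and returns sqrt(trace(C Wc C^T)), which is
    the RMS output under unit-intensity white-noise input.
    """
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ Wc @ C.T))


def multiple_model_objective(x, p_set, weights, build_plant):
    """Weighted sum of H2 performances over a discrete model set.

    x           -- tailoring design variables
    p_set       -- list of uncertainty vectors defining the model set
    weights     -- one nonnegative weight per model (e.g. uniform)
    build_plant -- hypothetical helper returning (A, B, C) for given (x, p)
    """
    return sum(w * h2_norm(*build_plant(x, p))
               for w, p in zip(weights, p_set))
```

This weighted sum then takes the place of the nominal performance objective in the tailoring optimization.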

