Chapter 5 Robust Performance Tailoring with Tuning - SSL - MIT
formance. This worst-case performance is then the objective function for the outer tailoring optimization. In effect, anti-optimization is analogous to performance tailoring for worst-case, instead of nominal, performance.

In order to use a gradient-based optimization algorithm to solve the anti-optimization problem efficiently, analytical gradients of the objective are required. These gradients are difficult to obtain given the form of Equation 3.4. As the tailoring parameters change, the worst-case uncertainty vector may move from one vertex to another, causing a discontinuity in the gradient. If the objective and constraints are linear, then the problem can be solved with a linear programming algorithm and the discontinuities do not cause a problem. However, if a quadratic approximation algorithm, such as SQP, is applied to a problem with nonlinear objectives and/or constraints, the discontinuity causes the optimization to misbehave and search inefficiently.

The problem can be formulated in a manner that is better suited for SQP by minimizing a dummy variable, $z$, and moving the performance at the uncertainty vertices to the constraints:

$$
\begin{aligned}
\min_{\vec{x},\,z} \quad & z \\
\text{s.t.} \quad & \vec{g}(\vec{x}) \le 0 \\
& h_i(z, \vec{x}, \vec{p}_i) \le 0 \quad \forall\, i = 1 \ldots n_{pv}
\end{aligned}
\tag{3.5}
$$

where the augmented constraints, $h_i(z, \vec{x}, \vec{p}_i)$, are defined as follows:

$$
h_i(z, \vec{x}, \vec{p}_i) = -z + f(\vec{x}, \vec{p}_i) \tag{3.6}
$$

By inspection, the gradients of the objective with respect to the tailoring, $\vec{x}$, and dummy, $z$, variables are zero and one, respectively. The performance gradients are included through the augmented constraint gradients, instead of in the objective function:

$$
\frac{\partial h_i(z, \vec{x}, \vec{p}_i)}{\partial \vec{x}} = \frac{\partial f(\vec{x}, \vec{p}_i)}{\partial \vec{x}} \tag{3.7}
$$

$$
\frac{\partial h_i(z, \vec{x}, \vec{p}_i)}{\partial z} = -1 \tag{3.8}
$$

In this alternate formulation (Equation 3.5) the optimization is a minimization with nonlinear constraints. Although the performance at each of the vertices is still required at each iteration, it is no longer necessary to determine the worst-case vertex. The problem is set up such that the optimal cost must lie on one of the constraint boundaries, and as a result the variable $z$ is the worst-case performance.

This robust design method is particularly well-suited for convex parametric uncertainty models. In their monograph [15], Ben-Haim and Elishakoff define convex models and discuss their application to problems in applied mechanics. The authors show that for most practical problems the uncertainty space is convex and therefore only the vertices of the space need be considered in robust design applications. This result is fortuitous as it allows a large reduction in the uncertainty set and guarantees robustness to all other uncertainty values within the bounds.

3.2.2 Multiple Model

Multiple model is a robust design technique borrowed from the field of robust control. It is applied to control system design in order to obtain a controller that is stable for a range of parameter values [10, 47]. In order to achieve this goal, the weighted average of the H2 norms of a discrete set of plants is minimized. The resulting solution is guaranteed to stabilize each of the plants in the set.

The multiple model principle is readily applied to the robust performance tailoring problem since the output RMS value calculated with the Lyapunov expression is also an H2 norm. Instead of minimizing the nominal performance, as in the PT case, a weighted sum of the performances of a set of models within the uncertainty space is minimized.
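As a concrete illustration, the weighted-H2 multiple model objective can be sketched with standard numerical tools. The single-degree-of-freedom spring-mass-damper plant, the stiffness design variable `k`, the uncertain damping values, and the equal weights below are all illustrative assumptions, not the thesis model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize_scalar

def h2_norm_sq(k, c, m=1.0):
    """Squared H2 norm of a spring-mass-damper plant (force -> displacement),
    computed from the controllability Gramian: A Wc + Wc A' + B B' = 0,
    ||G||_2^2 = C Wc C'. This mirrors the Lyapunov-based RMS expression."""
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    C = np.array([[1.0, 0.0]])                  # displacement output
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    return float(C @ Wc @ C.T)

# Discrete model set spanning a hypothetical damping uncertainty, equal weights.
dampings = [0.3, 0.4, 0.5]
weights = [1.0 / 3.0] * 3

def multiple_model_cost(k):
    # Weighted sum of squared H2 norms over the model set (multiple model objective).
    return sum(w * h2_norm_sq(k, c) for w, c in zip(weights, dampings))

# Tailor the stiffness k over a bounded design range.
res = minimize_scalar(multiple_model_cost, bounds=(1.0, 10.0), method="bounded")
```

For this toy plant the squared H2 norm reduces analytically to 1/(2ck), so the weighted cost decreases monotonically in k and the optimizer runs to the upper stiffness bound; in a realistic tailoring problem, mass or cost constraints would balance that trade.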
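The epigraph reformulation of Equation 3.5 can be exercised directly with an off-the-shelf SQP solver. The quadratic performance function `f` and the four uncertainty vertices below are hypothetical stand-ins for the thesis performance metric, chosen so the minimax answer is easy to check by symmetry:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical uncertainty-vertex set (corners of a 2-D box); a stand-in
# for the n_pv vertices of the convex uncertainty model.
vertices = [np.array([s1, s2]) for s1 in (-1.0, 1.0) for s2 in (-1.0, 1.0)]

def f(x, p):
    # Illustrative performance metric, minimized in the worst case over p_i.
    return (x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2

# Stack the design vector as y = [x1, x2, z]; the objective is the dummy z,
# so its gradient is [0, 0, 1], matching the text above Equations 3.7-3.8.
def objective(y):
    return y[2]

def objective_grad(y):
    return np.array([0.0, 0.0, 1.0])

# Augmented constraints h_i = -z + f(x, p_i) <= 0; SLSQP expects g(y) >= 0,
# so pass g_i(y) = z - f(x, p_i) = -h_i.
cons = [{"type": "ineq", "fun": (lambda y, p=p: y[2] - f(y[:2], p))}
        for p in vertices]

y0 = np.array([0.5, 0.5, 10.0])  # feasible start: z dominates every vertex
res = minimize(objective, y0, jac=objective_grad, constraints=cons,
               method="SLSQP")
x_opt, z_opt = res.x[:2], res.x[2]  # z_opt is the worst-case performance
```

At the optimum at least one augmented constraint is active, so `z_opt` equals the worst vertex performance without the optimizer ever having to identify which vertex is worst, which is exactly what removes the gradient discontinuity.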