Chapter 5 Robust Performance Tailoring with Tuning - SSL - MIT
ence in cost is due to the fact that the standard deviation metric is less conservative in this implementation. Note that both optimization algorithms take a very long time to converge when Monte Carlo standard deviations are calculated. This jump in computational effort occurs because a good estimate of the standard deviation requires running a large number of uncertainty combinations. In the case of the SQP algorithm, it is important that the Monte Carlo uncertainty distribution is chosen before running the optimization so that the uncertainty space is consistent from one iteration to the next. Otherwise, the gradients given in Equation 3.14 do not track the design changes.

The differences between the SQP and SA designs are similar to the other cases considered thus far: SA finds a sub-optimal design, but provides a starting point for SQP that is very close to the optimal design. Although the Monte Carlo method provides a more accurate measure of the standard deviation, the results indicate that the increase in accuracy is not worth the additional computational effort required. The combination of SA and SQP optimization with the vertex method converges in 17 minutes, compared to the 151 minutes required for the Monte Carlo metric. Although the vertex method may not be an accurate measure of standard deviation for the uniform distribution, it does provide a conservative measure of robustness for bounded uncertainty models.

In order to assess the effect of the weighting parameter, α, the statistical robustness algorithm is run over a range of relative weightings from 0.0 to 1.0. The nominal and worst-case performance for the resulting designs are plotted in Figure 3-5 along with the standard deviation measure obtained from the vertex method. The nominal performance is depicted with circles and the standard deviation with stars.
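The contrast between the two spread measures can be sketched in a few lines. The performance function `sigma_z` below is a hypothetical stand-in for the model's RMS performance metric, and the bounds are illustrative; the sketch only shows the mechanics: the vertex method evaluates every corner of the bounded uncertainty box (2^n evaluations), while the Monte Carlo estimate draws its sample set once, with a fixed seed, so the same uncertainty points are reused at every optimizer iteration.

```python
# Sketch: vertex method vs. fixed-sample Monte Carlo spread estimates under
# bounded parameter uncertainty. sigma_z is a hypothetical performance metric.
import itertools
import numpy as np

def sigma_z(x, p):
    # Hypothetical RMS-style performance depending on design variables x
    # and uncertain parameters p (not the thesis's actual model).
    return float(np.sqrt(np.sum((x * p) ** 2)))

def vertex_spread(x, p_lo, p_hi):
    """Evaluate performance at all 2^n vertices of the uncertainty box.
    Conservative for bounded uncertainty models, but only 2^n evaluations."""
    vertices = itertools.product(*zip(p_lo, p_hi))
    vals = [sigma_z(x, np.array(v)) for v in vertices]
    return np.std(vals), max(vals)          # spread and worst-case vertex

def monte_carlo_spread(x, p_lo, p_hi, n_samples=1000, seed=0):
    """Monte Carlo estimate over a uniform distribution. The sample set is
    drawn once (fixed seed) so the uncertainty space stays consistent from
    one optimizer iteration to the next; otherwise gradient estimates would
    not track the design changes."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(p_lo, p_hi, size=(n_samples, len(p_lo)))
    vals = [sigma_z(x, p) for p in samples]
    return np.std(vals), max(vals)

x = np.array([1.0, 2.0])                    # illustrative design point
p_lo, p_hi = np.array([0.8, 0.9]), np.array([1.2, 1.1])
print(vertex_spread(x, p_lo, p_hi))
print(monte_carlo_spread(x, p_lo, p_hi))
```

Because the Monte Carlo samples lie strictly inside the box, the vertex method's worst case bounds the sampled worst case here, which is the sense in which it is the more conservative measure.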
As α increases, the weight on the standard deviation decreases and the weight on nominal performance increases, as evidenced by the figure. The nominal performance decreases as α increases, while the standard deviation shows the opposite trend. At α = 0.0, the nominal performance is not included in the cost at all and only the standard deviation is minimized. At the other extreme, when α = 1.0, the standard deviation is eliminated from the cost function and the problem is equivalent to the PT optimization.
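The α sweep can be illustrated with a minimal sketch, assuming a weighted cost of the form J(x) = α·σ_nom(x) + (1 − α)·σ_std(x). The two quadratic metrics and the 1-D grid search below are hypothetical stand-ins for the thesis model and the SA/SQP optimization; the sketch only demonstrates the two extremes of the weighting.

```python
# Sketch of the weighted robust cost swept over alpha. The metrics are
# hypothetical quadratics, not the thesis's performance model.
import numpy as np

def sigma_nom(x):
    # Hypothetical nominal performance, minimized at x = 1.
    return (x - 1.0) ** 2 + 0.5

def sigma_std(x):
    # Hypothetical standard-deviation measure, minimized at x = 3.
    return 0.2 * (x - 3.0) ** 2 + 0.1

def robust_design(alpha, grid=np.linspace(-5.0, 5.0, 20001)):
    # Coarse 1-D grid search standing in for the SA + SQP optimization.
    cost = alpha * sigma_nom(grid) + (1.0 - alpha) * sigma_std(grid)
    return grid[np.argmin(cost)]

designs = {round(float(a), 1): robust_design(a)
           for a in np.linspace(0.0, 1.0, 11)}
# alpha = 0.0 minimizes only the spread measure; alpha = 1.0 recovers the
# performance-tailoring (PT) design that ignores robustness entirely.
print(designs[0.0], designs[1.0])
```

As α slides from 0 to 1, the minimizer migrates from the low-spread design toward the low-nominal-performance design, mirroring the opposing trends in the figure.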
Figure 3-5: Nominal performance (o), worst-case (□) performance, and standard deviation (*) for vertex statistical robustness (VSR) RPT designs vs. nominal performance weighting, α.

The squares on the plot represent the worst-case performance, or the performance at the worst-case uncertainty vertex. As α increases, and the weight on robustness decreases, the worst-case performance increases nonlinearly. There is a significant jump in the worst-case performance at α = 0.8, and as the weighting approaches α = 1.0 the curves plateau to the performance of the PT design.

3.3.2 Objective function comparisons

In the previous section the different implementations of optimization with the three RPT cost functions are compared for computational efficiency and performance. The combination of SA and SQP algorithms consistently achieves a lower cost value in less time than the Monte Carlo SQP implementation. In this section the SQP RPT designs are compared against each other and against the PT design for robustness. The nominal and worst-case performance values for the PT and RPT designs are plotted in Figure 3-6(a), and the values are listed in the accompanying table