Chapter 5: Robust Performance Tailoring with Tuning


The difference in cost is due to the standard deviation metric being less conservative in this implementation. Note that both optimization algorithms take a very long time to converge when Monte Carlo standard deviations are calculated. This jump in computational effort occurs because a good estimate of the standard deviation requires evaluating a large number of uncertainty combinations. In the case of the SQP algorithm, it is important that the Monte Carlo uncertainty samples are drawn before running the optimization so that the uncertainty space is consistent from one iteration to the next. Otherwise, the gradients given in Equation 3.14 do not track the design changes.

The differences between the SQP and SA designs are similar to the other cases considered thus far: SA finds a sub-optimal design, but provides a starting point for SQP that is very close to the optimal design. Although the Monte Carlo method provides a more accurate measure of the standard deviation, the results indicate that the increase in accuracy is not worth the additional computational effort required. The combination of SA and SQP optimization with the vertex method converges in 17 minutes, compared to the 151 minutes required with the Monte Carlo metric. Although the vertex method may not be an accurate measure of standard deviation for the uniform distribution, it does provide a conservative measure of robustness for bounded uncertainty models.

In order to assess the effect of the weighting parameter, α, the statistical robustness algorithm is run over a range of relative weightings from 0.0 to 1.0. The nominal and worst-case performance for the resulting designs are plotted in Figure 3-5, along with the standard deviation measure obtained from the vertex method. The nominal performance is depicted with circles and the standard deviation with stars. As α increases, the weight on the standard deviation decreases and the weight on nominal performance increases, as evidenced by the figure. The nominal performance decreases as α increases, while the standard deviation shows the opposite trend. At α = 0.0, the nominal performance is not included in the cost at all and only the standard deviation is minimized. At the other extreme, when α = 1.0, the standard deviation is eliminated from the cost function and the problem is equivalent to the PT optimization.
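To make the structure of this statistical-robustness optimization concrete, the sketch below combines the two ideas above: the Monte Carlo uncertainty samples are drawn once and held fixed so that the SQP gradients stay consistent between iterates, and a simulated-annealing pass seeds a gradient-based refinement. The performance model, bounds, sample counts, and α value are placeholders, and SciPy's dual_annealing and SLSQP are used only as stand-ins for the SA and SQP implementations in the thesis.

    import numpy as np
    from scipy.optimize import dual_annealing, minimize

    # Placeholder performance model: perf(x, u) is the predicted performance of
    # design x under uncertainty realization u.  Not the thesis model.
    def perf(x, u):
        return np.sum((x - 1.0) ** 2) + np.sum(u * x, axis=-1) + 100.0

    n_x = 4                                   # number of tailoring parameters (assumed)
    rng = np.random.default_rng(0)
    # Draw the Monte Carlo uncertainty samples ONCE and hold them fixed, so the cost
    # (and its finite-difference gradients inside SQP) is consistent between iterations.
    U = rng.uniform(-0.1, 0.1, size=(200, n_x))

    ALPHA = 0.5                               # relative weight on nominal performance

    def robust_cost(x):
        z = perf(x, U)                        # performance at every fixed MC sample
        z_nom = perf(x, np.zeros(n_x))        # nominal performance (no uncertainty)
        return float(ALPHA * z_nom + (1.0 - ALPHA) * z.std())

    bounds = [(-2.0, 2.0)] * n_x
    # Stage 1: simulated annealing finds a (possibly sub-optimal) design ...
    sa = dual_annealing(robust_cost, bounds, maxiter=100, seed=1)
    # Stage 2: ... which seeds a gradient-based SQP refinement.
    sqp = minimize(robust_cost, sa.x, method="SLSQP", bounds=bounds)
    print(f"SA cost: {sa.fun:.3f}, SQP-refined cost: {sqp.fun:.3f}")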

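Figure 3-5, shown next, plots the nominal performance, the worst-case performance, and the vertex standard deviation as α is swept from 0.0 to 1.0. The vertex measure used for the latter two quantities only requires evaluating the performance at the 2^n corners of the bounded uncertainty set. A minimal sketch of such a vertex statistic, again with a hypothetical performance function and bounds, is:

    import itertools
    import numpy as np

    def vertex_stats(perf, x, u_lo, u_hi):
        """Worst-case value and standard deviation of perf(x, u) over the 2^n
        corners of the box [u_lo, u_hi] (a vertex robustness measure)."""
        corners = itertools.product(*zip(u_lo, u_hi))   # all 2^n uncertainty vertices
        z = np.array([perf(x, np.array(c)) for c in corners])
        return z.max(), z.std()

    # Hypothetical example: 3 bounded uncertain parameters and a toy performance model.
    perf = lambda x, u: float(np.sum((x - 1.0) ** 2) + np.dot(u, x[:u.size]) + 100.0)
    u_lo, u_hi = np.full(3, -0.1), np.full(3, 0.1)
    x_design = np.array([0.8, 1.1, 0.9, 1.0])
    worst, sigma = vertex_stats(perf, x_design, u_lo, u_hi)
    print(f"worst-case = {worst:.3f}, vertex std = {sigma:.3f}")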
Figure 3-5: Nominal performance (o), worst-case performance (□), and standard deviation (*) in µm for vertex statistical robustness (VSR) RPT designs versus the nominal performance weighting, α.

The squares on the plot represent the worst-case performance, or the performance at the worst-case uncertainty vertex. As α increases, and the weight on robustness decreases, the worst-case performance increases nonlinearly. There is a significant jump in the worst-case performance at α = 0.8, and as the weighting approaches α = 1.0 the curves plateau to the performance of the PT design.

3.3.2 Objective function comparisons

In the previous section, the different implementations of optimization with the three RPT cost functions are compared for computational efficiency and performance. The combination of SA and SQP algorithms consistently achieves a lower cost value in less time than MC SQP. In this section, the SQP RPT designs are compared against each other and against the PT design for robustness. The nominal and worst-case performance values for the PT and RPT designs are plotted in Figure 3-6(a), and the values are listed in the accompanying table.
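A comparison like the one tabulated alongside Figure 3-6(a) can be assembled by evaluating each candidate design's nominal and worst-case (vertex) performance side by side. The designs, model, and numbers below are purely illustrative placeholders, not the thesis values:

    import itertools
    import numpy as np

    # Toy performance model and uncertainty box (illustrative only).
    perf = lambda x, u: float(np.sum((x - 1.0) ** 2) + np.dot(u, x[:u.size]) + 100.0)
    u_lo, u_hi = np.full(3, -0.1), np.full(3, 0.1)
    corners = [np.array(c) for c in itertools.product(*zip(u_lo, u_hi))]

    # Hypothetical stand-ins for the PT and RPT designs being compared.
    designs = {
        "PT":      np.array([1.00, 1.00, 1.00, 1.00]),
        "VSR RPT": np.array([0.85, 1.05, 0.95, 1.00]),
        "MC RPT":  np.array([0.90, 1.02, 0.97, 1.00]),
    }

    print(f"{'design':>8} {'nominal':>9} {'worst-case':>11}")
    for name, x in designs.items():
        z_nom = perf(x, np.zeros(3))
        z_wc = max(perf(x, u) for u in corners)
        print(f"{name:>8} {z_nom:9.3f} {z_wc:11.3f}")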

