Chapter 5 Robust Performance Tailoring with Tuning - SSL - MIT
maybe that it becomes certain that the system will fail. Design changes at this stage are quite expensive now that flight hardware is built. Space-based, structurally-connected interferometers are particularly affected by this trade since they are classified as both high-performance and high-risk systems. The precision optical performance required for astrometry, nulling and imaging, coupled with the size and flexibility of the instrument, places heavy demands on the structural dynamics and control systems, while the high cost of a fully-integrated system test limits the ability to guarantee desired on-orbit performance prior to launch. As a result, it is necessary to design the system very precisely, yet rely heavily on models and simulations, which are approximations, to predict performance.

1.2.1 Background

One approach to the design of these systems is shown in Figure 1-3. The figure is broken up into three different regions. In Region I, testbeds are used to validate modeling techniques and generate model uncertainty factors (MUFs) [18]. Testbed models are developed and performance predictions from these models are compared to data from the testbeds. The models are refined until all of the major features visible in the testbed data are captured in the model. Then MUFs are chosen to approximate any remaining differences between the model and the data that are difficult to quantify. Model predictions that have been adjusted by MUFs should be conservative when compared to the testbed data.

In Region II, the component models are used to predict performance and drive system design. The component developers deliver models of their respective designs. The MUFs are applied to the component models, and the models are integrated to evaluate system performance. The component designs and associated models are iterated upon until the predicted system performance meets requirements.
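The Region I step of choosing MUFs so that adjusted predictions bound the testbed data can be illustrated with a small sketch. This is an assumption-laden example, not the thesis's actual procedure: it uses a single scalar MUF and hypothetical response values, whereas real MUFs may be applied per mode, per frequency band, or per component.

```python
# Illustrative sketch only: deriving a scalar model uncertainty factor
# (MUF) so that MUF-adjusted model predictions bound the testbed data.
# The scalar formulation and all values are assumptions for illustration.

def derive_muf(model_pred, testbed_data):
    """Smallest factor >= 1 such that factor * prediction >= data everywhere."""
    ratios = [d / p for d, p in zip(testbed_data, model_pred)]
    return max(1.0, max(ratios))

# The model slightly under-predicts the measured response at some points.
model_pred   = [1.0, 2.0, 1.5, 0.8]   # predicted RMS response (hypothetical)
testbed_data = [1.1, 1.8, 1.9, 0.9]   # measured testbed response (hypothetical)

muf = derive_muf(model_pred, testbed_data)
conservative = [muf * p for p in model_pred]

# MUF-adjusted predictions are conservative relative to the data
# (small tolerance guards against floating-point round-off).
assert all(c >= d - 1e-12 for c, d in zip(conservative, testbed_data))
```

The key property is the last assertion: after scaling, no measured point exceeds its prediction, which is what makes the adjusted model usable as a conservative bound in Region II.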
Once the designs are validated in this way, the developers build and deliver the flight system components. Upon delivery, the components are tested and compared with the conservative component models before acceptance. If the test data lies within the model predictions, the models are considered validated, and the components are accepted.
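The acceptance check described above can be sketched as a simple envelope test, under the assumption (made here for illustration; the function name, envelope shape, and data are hypothetical) that each measured value must be bounded by its MUF-scaled prediction:

```python
# Hypothetical sketch of the component acceptance check: a delivered
# component is accepted only if its test data lies within the
# conservative (MUF-adjusted) model prediction envelope.

def accept_component(test_data, model_pred, muf):
    """Accept if every measured value is bounded by its MUF-scaled prediction."""
    return all(d <= muf * p for d, p in zip(test_data, model_pred))

pred = [0.5, 1.2, 0.9]    # nominal component model predictions (hypothetical)
muf  = 1.3                # uncertainty factor carried over from Region I

good = [0.6, 1.4, 1.0]    # lies within the conservative envelope
bad  = [0.6, 1.7, 1.0]    # exceeds the envelope at one point

print(accept_component(good, pred, muf))  # True  -> accept the component
print(accept_component(bad, pred, muf))   # False -> investigate / re-test
```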
Figure 1-3: Current model validation and performance assessment approach. [Flowchart spanning Regions I, II, and III. Region I: generate testbed model, compare to testbed data, and derive MUFs once the general features are captured. Region II: generate flight system models (with MUFs), build and test flight system components, and check that the results lie within predictions. Region III: run the on-orbit system simulation; launch if performance requirements are met, otherwise redesign.]

In Region III, the test data from the component hardware is combined with an on-orbit simulation to predict system performance in the operational environment. The predictions are compared to the limited system validation test data that is available as well as to the requirements. Component interface uncertainty becomes relevant in this step since a blend of models and data is used in the simulation. If the simulation prediction meets requirements and the validation tests match predictions, the system is launched. If the simulation does not meet requirements, launch is delayed to allow for redesign and adjustments.

Four mission scenarios that could arise based on the process described above are listed in Table 1.1. In the first scenario, the simulation predictions meet performance requirements, the system is launched, and the on-orbit performance matches the predictions, resulting in a successful mission. In the second scenario, the simulation predicts adequate performance, but the predictions are incorrect and on-orbit performance is not adequate, leading to mission failure. In the third scenario, the predictions are incorrect again, but this time the simulation predicts poor performance while the on-orbit behavior would have been adequate, and the result is an unnecessary delay in launch. Finally, in the fourth scenario, the simulation correctly predicts poor performance, and the delay for redesign prevents what would otherwise have been a mission failure.
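The four scenarios amount to a two-by-two cross of the simulation's verdict against the true on-orbit outcome. A sketch of that logic (the function is hypothetical and the labels paraphrase Table 1.1):

```python
# Hypothetical sketch: the four mission scenarios as a 2x2 cross of the
# simulation's prediction and the (unknowable in advance) true outcome.

def scenario(sim_meets_reqs, onorbit_adequate):
    if sim_meets_reqs and onorbit_adequate:
        return "launch: mission success"
    if sim_meets_reqs and not onorbit_adequate:
        return "launch: mission failure (prediction wrong)"
    if not sim_meets_reqs and onorbit_adequate:
        return "delay: unnecessary redesign (prediction wrong)"
    return "delay: redesign correctly avoids failure"

# Enumerate all four cells of the table.
for sim in (True, False):
    for orbit in (True, False):
        print(sim, orbit, "->", scenario(sim, orbit))
```

The two off-diagonal cells are the costly ones: an optimistic wrong prediction launches a failure, while a pessimistic wrong prediction buys an unnecessary delay.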