Chapter 5: Robust Performance Tailoring with Tuning (SSL, MIT)


...maybe that it becomes certain that the system will fail. Design changes at this stage are quite expensive now that flight hardware is built. Space-based, structurally-connected interferometers are particularly affected by this trade since they are classified as both high-performance and high-risk systems. The precision optical performance required for astrometry, nulling, and imaging, coupled with the size and flexibility of the instrument, places heavy demands on the structural dynamics and control systems, while the high cost of a fully-integrated system test limits the ability to guarantee desired on-orbit performance prior to launch. As a result, it is necessary to design the system very precisely, yet rely heavily on models and simulations, which are approximations, to predict performance.

1.2.1 Background

One approach to the design of these systems is shown in Figure 1-3. The figure is divided into three regions. In Region I, testbeds are used to validate modeling techniques and generate model uncertainty factors (MUFs) [18]. Testbed models are developed, and performance predictions from these models are compared to data from the testbeds. The models are refined until all of the major features visible in the testbed data are captured in the model. Then MUFs are chosen to approximate any remaining differences between the model and the data that are difficult to quantify. Model predictions that have been adjusted by MUFs should be conservative when compared to the testbed data.

In Region II, the component models are used to predict performance and drive system design. The component developers deliver models of their respective designs. The MUFs are applied to the component models, and the models are integrated to evaluate system performance. The component designs and associated models are iterated upon until the predicted system performance meets requirements. Once the designs are validated in this way, the developers build and deliver the flight system components. Upon delivery, the components are tested and compared with the conservative component models before acceptance. If the test data lie within the model predictions, the models are considered validated and the components are accepted.
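The MUF bookkeeping in Regions I and II can be made concrete with a small sketch. The snippet below is illustrative only: it assumes a scalar MUF defined as the worst-case ratio of measured to predicted RMS performance across testbed cases, whereas real programs may define and apply MUFs per mode or per frequency band. All numbers, names, and the requirement value are hypothetical.

```python
import numpy as np

def derive_muf(predicted, measured):
    """Derive a scalar model uncertainty factor (MUF) as the worst-case
    ratio of measured to predicted response over the testbed cases,
    floored at 1.0 so the adjusted prediction is never less conservative
    than the nominal model. (Hypothetical definition for illustration.)"""
    ratios = np.asarray(measured, dtype=float) / np.asarray(predicted, dtype=float)
    return max(1.0, float(ratios.max()))

# Region I: hypothetical testbed results, predicted vs. measured RMS (nm)
predicted_rms = [4.2, 5.1, 3.8]
measured_rms = [4.9, 5.0, 4.6]
muf = derive_muf(predicted_rms, measured_rms)

# Region II: apply the MUF to a (hypothetical) flight-model prediction and
# check the conservative estimate against the system requirement.
flight_prediction_rms = 6.0   # nm, nominal flight-model prediction
requirement_rms = 8.0         # nm, system performance requirement

conservative_rms = muf * flight_prediction_rms
print(f"MUF = {muf:.2f}, conservative prediction = {conservative_rms:.2f} nm")
print("meets requirement" if conservative_rms <= requirement_rms else "iterate design")
```

In this scheme the MUF inflates every model prediction, so a design that passes the requirement check does so with margin against the unmodeled effects the MUF is meant to cover.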

[Figure 1-3 flowchart: Region I, generate testbed model, compare to testbed data, and derive MUFs once the general features are captured; Region II, generate flight system models (with MUFs), build and test flight system components, and check whether results lie within predictions; Region III, on-orbit system simulation, launch if performance requirements are met, otherwise redesign.]

Figure 1-3: Current model validation and performance assessment approach.

In Region III, the test data from the component hardware is combined with an on-orbit simulation to predict system performance in the operational environment. The predictions are compared to the limited system validation test data that is available, as well as to the requirements. Component interface uncertainty becomes relevant in this step since a blend of models and data is used in the simulation. If the simulation prediction meets requirements and the validation tests match predictions, the system is launched. If the simulation does not meet requirements, launch is delayed to allow for redesign and adjustments.

Four mission scenarios that could arise from the process described above are listed in Table 1.1. In the first scenario, the simulation predictions meet the performance requirements, the system is launched, and the on-orbit performance matches the predictions, resulting in a successful mission. In the second scenario, the simulation predicts adequate performance, but the predictions are incorrect and on-orbit performance is not adequate, leading to mission failure. In the third scenario, the predictions are again incorrect, but this time the simulation predicts poor performance while the on-orbit behavior would have been adequate, and the result is an unnecessary delay in launch. Finally, in the fourth scenario, the simulation correctly predicts poor performance, and launch is delayed for a redesign that is genuinely needed.
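The four scenarios follow directly from two binary outcomes: whether the simulation predicts adequate performance, and whether on-orbit performance would actually be adequate. A minimal sketch of that classification is given below; the function and label names are ours for illustration, not taken from Table 1.1.

```python
def classify_scenario(predicted_adequate: bool, actual_adequate: bool) -> str:
    """Map the simulation prediction and the actual (or would-be) on-orbit
    outcome to the four mission scenarios described in the text."""
    if predicted_adequate and actual_adequate:
        return "successful mission"        # scenario 1: launch, prediction correct
    if predicted_adequate and not actual_adequate:
        return "mission failure"           # scenario 2: launch, prediction wrong
    if not predicted_adequate and actual_adequate:
        return "unnecessary launch delay"  # scenario 3: prediction wrong, needless delay
    return "delay for needed redesign"     # scenario 4: prediction correctly flags a problem

# Enumerate the full truth table underlying Table 1.1's structure.
for predicted in (True, False):
    for actual in (True, False):
        print(f"predicted={predicted!s:5}  actual={actual!s:5}  -> "
              f"{classify_scenario(predicted, actual)}")
```

The two off-diagonal cases are the costly ones: an optimistic prediction risks mission failure, while a pessimistic one forces an avoidable launch delay.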

