predict performance in an incorrect situation or environment. Therefore a mutual trust between simulation and tests is fundamental.

VV&A and VV&C

The definitions given in the following paragraphs are taken from the M&S Master Plan, published by the US DoD in October 1995, and from the NATO M&S Master Plan AC/323(SGMS)D/2, published in August 1998. They are widely accepted by the international simulation community.

An efficient evaluation strategy should include a verification and validation (V&V) of models and simulations, following the increasing maturity of the system, in order to establish credible information.

Following these definitions, verification is the process that determines whether a model or a simulation represents precisely and faithfully the conceptual description and the development specification. This process also includes a verification of the software engineering techniques used. To make a long story short, the aim is to prove that what has been done corresponds to what was required and that it was done correctly.

Validation is the process that determines the adequacy of the model or the simulation to the real world from the point of view of the intended use of the M&S. In other words, the issue is to evaluate the way in which a response was brought to the initial problem.

Based on an adequate V&V, the model or the simulation is determined to be acceptable for an intended use within a specific application framework. This constitutes the accreditation.

It should be noticed that all data used by M&S also have to be certified, by a process called VV&C (verification, validation and certification). Supplementary definitions follow.

Data verification, from the data producer's point of view, consists in the use of techniques and procedures that guarantee that the data satisfy the constraints defined by the data standards. From the data user's point of view, verification consists in the use of techniques and procedures that guarantee that the data follow standards and usual rules.

Data validation consists in the documented evaluation of the data by domain experts and by comparison with common reference values. From the data producer's point of view, this evaluation has to be done following explicit criteria and assumptions. From the user's point of view, the evaluation is done with respect to the intended use within a particular model.

The VV&C process checks the internal consistency and correctness of the data, validates the fact that they represent entities of the real world in conformity with the intended use, and certifies that the data have the specified quality level corresponding to that intended use. Here also, this process has the two perspectives described above, depending on whether one takes the data producer's or the data user's point of view.

Another issue is that data can be available in two forms. They can be unprocessed, coming from tests, the available literature, intelligence, etc.; their credibility then depends on the collection process and on the intrinsic credibility of the sources of information used. But they can also be aggregated, i.e. generated by applying various processing to unprocessed data; it is then necessary to examine more closely the adequacy of the data to the models and to their intended use.

The main point to remember is that data coming from laboratory or field tests will be integrated in order to validate models and simulations as the system matures. This leads to a complete set of M&S which gains fidelity (this is the verification aspect) and credibility (this is the validation aspect).
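As an aside, the producer-side data verification described above can be made more concrete with the following minimal sketch, written here in Python. The field names, the constraint format and the sample record are purely illustrative assumptions; they are not taken from the cited master plans or from any particular data standard.

# Illustrative sketch only: producer-side data verification against a
# hypothetical data standard expressed as simple type and range constraints.
STANDARD = {
    "radar_cross_section_m2": {"type": float, "min": 0.0},
    "velocity_mps": {"type": float, "min": 0.0, "max": 3000.0},
    "sensor_id": {"type": str},
}

def verify_record(record):
    """Return the list of constraint violations found in one data record."""
    violations = []
    for field, rule in STANDARD.items():
        if field not in record:
            violations.append("missing field: " + field)
            continue
        value = record[field]
        if not isinstance(value, rule["type"]):
            violations.append(field + ": wrong type, expected " + rule["type"].__name__)
            continue
        if "min" in rule and value < rule["min"]:
            violations.append(field + ": value below the minimum allowed by the standard")
        if "max" in rule and value > rule["max"]:
            violations.append(field + ": value above the maximum allowed by the standard")
    return violations

if __name__ == "__main__":
    sample = {"radar_cross_section_m2": 1.2, "velocity_mps": 5400.0}
    for problem in verify_record(sample):
        print(problem)  # reports the out-of-range velocity and the missing sensor_id

The user-side check would follow the same mechanics, but with rules reflecting the standards and usual rules relevant to the intended use within a particular model.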
It is essential to remember that any validation is done with an intended use in mind. This has been underlined from the start in the various simulation reference books, but it is just as fundamental for test data or for any test. Hence it is necessary to document all test conditions, as well as all decisions that led to freezing this or that degree of freedom within the scenario or the environment.

This last remark is important, as the VV&A and VV&C processes are very often described as costly, or as a necessary evil that generates initial overhead costs and becomes cost-efficient only through subsequent reuse. However, the same argument applies stricto sensu to tests and test data, which is rarely pointed out, all the more so since in past decades, with high budgets, it was possible to conduct almost any desired test: we have today lots of test data that are totally useless because they cannot be reused in a coherent manner. As a first idea of the induced costs, we refer to the report “Prefeasibility study on simulation based design and virtual prototyping”, published by the NATO Industrial Advisory Group in September 2000, reference NIAG-D(2000)9 AC/141(NG-6)D/25: the VV&A cost is estimated at 15% of the global acquisition cost, and at between 2 and 6% for a reused simulation.

Coming back to verification and validation, it is difficult to evaluate a priori the necessary validation level, even if it is possible to define the effort necessary to reach a given credibility level. This comes from the fact that validation is performed in relation to an intended use of the product (data, model or simulation), which cannot be made explicit from the beginning of the life cycle. Such a difficulty should not be seen as a tremendous obstacle, but as the necessity to conduct VV&A and VV&C within an iterative and incremental approach, throughout the whole system life cycle.

Technologies: solutions, constraints and necessary effort

Virtual proving grounds, with a tight coupling between simulations and test data within virtual and digital synthetic environments, need to satisfy a few constraints in order to be viable. There is a need for interoperability between all simulations, and even between systems which rely heavily on software, like C4ISR systems. This will become ever more frequent with the development of systems-of-systems and of network-centric warfare or cooperative engagement concepts. It is not reasonable to develop specific interfaces every time, as a trivial cardinality analysis shows (a small worked example is sketched below): the number of such interfaces is one order of magnitude larger than the number of systems involved, and an iterative evolution of one system thus implies an evolution of all its interfaces, i.e. of a number on the order of the number of connected systems considered. The (only) cost-efficient way is to adopt reference architectures at the various levels, usually known as meta-models, from which the conceptual models of the systems can easily be established. Following these reference architectures is indeed a constraint, but a much smaller one than the savings it generates, since the evolution of a system a priori requires only the modification of its interface with the reference model. Such reference architectures exist: HLA for simulations; others are under development for C4ISR systems, or are foreseen for robotic systems (cf. JAUGS, the “Joint Architecture for Unmanned Ground Systems” promoted by the US Army).

In a similar fashion, at the data level, exchange standards are essential. As there is a priori no unique format (SEDRIS should be a major format in the coming years), there is a need for meta-data standards. This is under study, for instance by the OMG, the “Object Management Group”.
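To make the cardinality argument above concrete, here is a short, purely illustrative sketch; the system count of 20 is an arbitrary assumption chosen only for the example.

# Illustrative arithmetic only: interface count for n interoperating systems.
def point_to_point_interfaces(n):
    """Each pair of systems needs its own specific interface: n*(n-1)/2."""
    return n * (n - 1) // 2

def reference_architecture_adapters(n):
    """Each system needs a single adapter to the common reference model."""
    return n

if __name__ == "__main__":
    n = 20  # arbitrary example value
    print(point_to_point_interfaces(n))        # 190 specific interfaces
    print(reference_architecture_adapters(n))  # 20 adapters
    # Changing one system touches up to n-1 = 19 specific interfaces,
    # but only 1 adapter when a reference architecture is used.

The pairwise interface count grows quadratically with the number of systems, while the adapter count grows only linearly, which is the order-of-magnitude gap mentioned above.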
Reviewing the main efforts that should be investigated in order to optimize the collaboration between tests and simulations, one finds:

