…criteria to divide a sample into two or more parts are set in the input space, and the test is performed using the corresponding points in the output space.

2.2.2 Variance-based methods

The variance (or equivalently the standard deviation) and the entropy are the main measures of uncertainty in the theory of Probability. The larger the variance of a random variable, the less accurate our knowledge about it. Decreasing the variance of a given output variable is an attractive target that may sometimes be achieved by decreasing the variance of the input parameters (this is not always true; remember the possibility of risk dilution). This is what makes methods that try to find out what fraction of the output uncertainty (variance) may be attributed to the uncertainty (variance) in each input so attractive.

Variance-based methods find their theoretical support in Sobol's decomposition of any integrable function on the unit reference hypercube into 2^k orthogonal summands of different dimension: the mean value of the function, k functions that each depend on only one input parameter, k(k-1)/2 functions that depend on only two input parameters, k(k-1)(k-2)/6 functions that depend on only three input parameters, and so on. Replacing any output variable of the system model (our function) by its Sobol decomposition in the integral used to compute its variance produces, in a straightforward manner, the decomposition of the variance into its components. The quotient between each component of the variance and the total variance provides the fraction of the variance attributed to each single input parameter (main effects), to each combination of only two input parameters (second-order interactions), and so on. These are called Sobol sensitivity indices; see Sobol (1993). It is important to remark that Sobol's decomposition is equivalent to the classical Analysis of Variance (ANOVA) used in Statistics.
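
In compact form (standard notation: f_0 is the mean value, V_i, V_ij, … are the variances of the corresponding summands, and V is the total variance of the output), the decomposition and the resulting indices read:

```latex
% Sobol decomposition of the model output on the unit hypercube
f(x_1,\dots,x_k) = f_0 + \sum_{i=1}^{k} f_i(x_i)
  + \sum_{1 \le i < j \le k} f_{ij}(x_i,x_j) + \dots + f_{12\dots k}(x_1,\dots,x_k)

% Corresponding decomposition of the variance and the sensitivity indices
V = \sum_{i=1}^{k} V_i + \sum_{1 \le i < j \le k} V_{ij} + \dots + V_{12\dots k},
\qquad S_i = \frac{V_i}{V}, \quad S_{ij} = \frac{V_{ij}}{V}, \; \dots
```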

Several algorithms have been proposed to compute Sobol's indices, the first one by Sobol himself. The main problem is the efficiency of the method: it needs one specific sample to compute each sensitivity index. Since its development, huge efforts have been made to improve the strategies (algorithms) used to compute Sobol's indices; see for example Saltelli (2002) and Tarantola et al. (2006). It remains a powerful but computationally expensive method.
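
To illustrate why a dedicated sample is needed per index, the following sketch estimates first-order indices with a pick-freeze scheme of the kind analysed by Saltelli (2002). The test model and all numerical choices are assumptions made only for this example, not part of the original text.

```python
import numpy as np

def first_order_sobol(model, k, n, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices (sketch)."""
    A = rng.random((n, k))            # first independent sample on the unit hypercube
    B = rng.random((n, k))            # second independent sample
    y_A, y_B = model(A), model(B)
    var_y = np.var(np.concatenate([y_A, y_B]), ddof=1)
    S = np.empty(k)
    for i in range(k):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]          # column i taken from B, the rest "frozen" from A
        y_ABi = model(AB_i)
        # Estimator of V_i = Var(E[Y | X_i]), normalised by the total variance
        S[i] = np.mean(y_B * (y_ABi - y_A)) / var_y
    return S

def test_model(x):
    # Hypothetical three-input model, chosen only to make the sketch runnable
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

rng = np.random.default_rng(0)
print(first_order_sobol(test_model, k=3, n=100_000, rng=rng))
```

Each index S_i requires its own re-evaluated matrix AB_i, which is the origin of the computational cost mentioned above.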

Independently, and well before the development of Sobol's decomposition and Sobol's indices, a method had been developed to compute first-order sensitivity indices (equivalent to first-order Sobol indices): the Fourier Amplitude Sensitivity Test (FAST); see Cukier et al. (1973), Schaibly and Shuler (1973) and Cukier et al. (1975). In order to compute sensitivity indices, these authors create a search curve that covers the input space reasonably well. Each input parameter is assigned an integer frequency, and varying all input parameters simultaneously according to that set of frequencies generates the search curve. Equally spaced points are sampled from the search curve and used to perform a Fourier analysis. The coefficients corresponding to the frequency (and its harmonics) assigned to each input parameter are used to compute the corresponding sensitivity index. Saltelli et al. (1999) introduced further improvements to the method, among them the possibility of computing total sensitivity indices for a given input parameter (the fraction of the variance due to it and to all its interactions of any order). FAST remains unable to compute sensitivity indices for interactions.
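
A minimal sketch of the classical FAST recipe is given below, using the search curve of Saltelli et al. (1999), x_i(s) = 1/2 + arcsin(sin(w_i s))/pi. The frequency set, the harmonic order and the test model are illustrative assumptions; in a real application the frequencies must be chosen free of interference up to the harmonic order used, and the number of points must satisfy the corresponding Nyquist condition.

```python
import numpy as np

def fast_first_order(model, freqs, n_points, harmonics=4):
    """Classical FAST estimate of first-order sensitivity indices (sketch)."""
    freqs = np.asarray(freqs)
    # Equally spaced points along the search curve, s in (-pi, pi)
    s = np.pi * (2.0 * np.arange(1, n_points + 1) - n_points - 1) / n_points
    # Search curve of Saltelli et al. (1999): one integer frequency per input
    x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(x)
    # Fourier coefficients of the model output along the curve
    p = np.arange(1, n_points // 2 + 1)
    A = (y @ np.cos(np.outer(s, p))) / n_points
    B = (y @ np.sin(np.outer(s, p))) / n_points
    total_var = 2.0 * np.sum(A ** 2 + B ** 2)
    S = np.empty(len(freqs))
    for i, w in enumerate(freqs):
        h = w * np.arange(1, harmonics + 1)        # assigned frequency and its harmonics
        S[i] = 2.0 * np.sum(A[h - 1] ** 2 + B[h - 1] ** 2) / total_var
    return S

def test_model(x):
    # Same hypothetical three-input model used above
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

print(fast_first_order(test_model, freqs=[11, 35, 115], n_points=2049))
```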

Correlation Ratios are an alternative to Sobol's method and FAST for computing first-order sensitivity indices using an ordinary sample (SRS, LHS, etc.). So, although it is a method used to compute variance-based sensitivity indices, it could also be considered Monte Carlo based.
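
A simple way to estimate a correlation ratio from an ordinary sample is to partition the range of the input into classes and compare the variance of the class means of the output with the total variance, as in the sketch below (the binning scheme and sample size are assumptions of the example).

```python
import numpy as np

def correlation_ratio(x_i, y, n_bins=20):
    """Estimate Var(E[Y | X_i]) / Var(Y) by binning the input sample (sketch)."""
    # Quantile-based bin edges so that every class is roughly equally populated
    edges = np.quantile(x_i, np.linspace(0.0, 1.0, n_bins + 1))
    labels = np.clip(np.digitize(x_i, edges[1:-1]), 0, n_bins - 1)
    grand_mean = y.mean()
    v_between = 0.0
    for b in range(n_bins):
        in_bin = labels == b
        if in_bin.any():
            # Contribution of this class to the variance of the conditional means
            v_between += in_bin.mean() * (y[in_bin].mean() - grand_mean) ** 2
    return v_between / y.var()

rng = np.random.default_rng(0)
X = rng.random((50_000, 3))                      # plain simple random sample (SRS)
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]
print([round(correlation_ratio(X[:, i], y), 3) for i in range(3)])
```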
