Schmucker-Weidelt Lecture Notes, Aarhus, 1975

Again, from a finite erroneous data set we can extract only averaged estimates with statistical uncertainties. As in the linear case, the averaging kernel $A(r_0|r)$ is built up from a linear combination of the data kernels,

$$A(r_0|r) = \sum_{i=1}^{N} a_i(r_0, m)\, G_i(r, m).$$

Introducing (6.19) into (6.20), we obtain a result which for nonlinear kernels is different from (6.8), since in the nonlinear case $g_i(m) \neq q_i(m)$. In the linear case, two models $m$ and $m'$ which both satisfy the data lead to the same average model $\langle m(r_0) \rangle$. In the nonlinear case the average models are different; the difference, however, is of second order in $(m' - m)$. (Exercise!) The Backus-Gilbert procedure in the nonlinear case therefore requires a model which already nearly fits the data; it can then give an appraisal of the information content of a given data set.
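One way to see the second-order claim of the exercise (a sketch only, under the assumptions that both models reproduce the data exactly and that the average is formed with the kernel constructed at $m$): expanding the functionals about $m$ gives

$$0 = g_i(m') - g_i(m) = \int G_i(r, m)\,[m'(r) - m(r)]\,dr + O(|m' - m|^2),$$

so every data kernel annihilates $m' - m$ to first order. Hence

$$\langle m'(r_0) \rangle - \langle m(r_0) \rangle = \int A(r_0|r)\,[m'(r) - m(r)]\,dr = \sum_{i=1}^{N} a_i(r_0, m) \int G_i(r, m)\,[m'(r) - m(r)]\,dr = O(|m' - m|^2).$$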

6.2. Generalized matrix inversion

The generalized matrix inversion is an alternative procedure to the Backus-Gilbert method. It is strictly applicable only to linear problems, where the model under consideration consists of a set of discrete unknown parameters; nonlinear problems are generally linearized to bring them within the range of the method. Assume that we want to determine the $M$-component parameter vector $p$ with $p^T = (p_1, \ldots, p_M)$, and that we have $N$ functionals (rules) $g_i$, $i = 1, \ldots, N$, which assign to any model $p$ a number which, when measured, has the average value $y_i$ and variance $\mathrm{var}(y_i)$:

$$y_i = g_i(p), \qquad i = 1, \ldots, N.$$

Suppose that an approximation $p_0$ to $p$ is known. Then, neglecting terms of order $O(|p - p_0|^2)$, we have

$$y_i = g_i(p_0) + \sum_{k=1}^{M} \frac{\partial g_i}{\partial p_k}\,(p_k - p_{k0}). \qquad (6.22)$$

Eq. (6.22) constitutes a system of $N$ equations for the $M$ parameter changes $p_k - p_{k0}$. The generalized matrix inversion provides a solution to this system, irrespective of whether $N = M$, $N < M$, or $N > M$. If the rank of the system matrix $\partial g_i/\partial p_k$ is equal to $\min(M, N)$, the generalized matrix inversion provides in the case

$M = N$ (regular system): the ordinary solution,
$M < N$ (overconstrained system): the least-squares solution,
$M > N$ (underdetermined system): the smallest correction vector $p - p_0$.

The generalized inverse exists also when the rank of $\partial g_i/\partial p_k$ is smaller than $\min(M, N)$. After solving the system, the correction is applied to $p_0$, and the corrected vector serves in the next step as a new approximation to $p$, thus starting an iterative scheme.

It is convenient to give all data the same variance $\sigma_0^2$, with $\sigma_i^2 = \mathrm{var}(y_i)$, thus defining as new data and matrix elements

$$y_i' = \frac{\sigma_0}{\sigma_i}\,[y_i - g_i(p_0)], \qquad G_{ik} = \frac{\sigma_0}{\sigma_i}\,\frac{\partial g_i}{\partial p_k},$$

thus weighting the residuals in a least-squares solution according to their accuracy, which makes sense. Let

$$x = p - p_0$$

be the parameter correction vector. Then (6.22) reads

$$y' = G\,x$$

(the matrix $G$ corresponds to the data kernels $G_i(r)$ in the Backus-Gilbert theory). In the generalized matrix inversion, $G$ is first decomposed into data eigenvectors $u_j$ and parameter eigenvectors $v_j$:

$$G = U\,\Lambda\,V^T. \qquad (6.26)$$
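To make the scheme concrete, here is a minimal numerical sketch of a single iteration in Python/NumPy. The forward functionals g, the starting model p0, and the variances are all invented for illustration; only the variance weighting and the decomposition (6.26) follow the text, with numpy.linalg.svd standing in for the eigenvector decomposition.

```python
import numpy as np

# Minimal sketch of one step of the iterative scheme described above.
# The functionals g_i and all numbers are invented for illustration.

def g(p):
    """Hypothetical nonlinear functionals g_i(p), here N = 3, M = 2."""
    return np.array([p[0] + p[1], p[0] * p[1], p[0] ** 2])

def jacobian(p):
    """System matrix of partial derivatives dg_i/dp_k for g above."""
    return np.array([[1.0, 1.0],
                     [p[1], p[0]],
                     [2.0 * p[0], 0.0]])

p_true = np.array([2.0, 3.0])          # model to be recovered (demo only)
sigma = np.array([0.1, 0.2, 0.1])      # standard deviations of the data
y = g(p_true)                          # noise-free synthetic data

p0 = np.array([1.5, 3.5])              # starting approximation to p

# Give all data the same variance sigma_0^2 by scaling residuals and
# matrix rows with sigma_0 / sigma_i, as suggested in the text.
sigma0 = 1.0
w = sigma0 / sigma
r = w * (y - g(p0))                    # weighted residuals (the "new data" y')
G = w[:, None] * jacobian(p0)          # weighted system matrix G_ik

# Eq. (6.26): decompose G into data eigenvectors (columns of U) and
# parameter eigenvectors (columns of V), so that G = U @ diag(lam) @ Vt.
U, lam, Vt = np.linalg.svd(G, full_matrices=False)

# Generalized inverse: invert only eigenvalues above a tolerance, so the
# same formula also covers the rank-deficient case.
tol = 1e-10 * lam.max()
lam_inv = np.where(lam > tol, 1.0 / lam, 0.0)
x = Vt.T @ (lam_inv * (U.T @ r))       # correction vector x = p - p0

p1 = p0 + x                            # next approximation; iterate from here
print("corrected model:", p1)
```

Inverting only the eigenvalues above a tolerance mirrors the case distinction above: for full rank the step gives the ordinary or least-squares solution, while for a rank-deficient system it still returns the smallest correction vector compatible with the data.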
