Schmucker-Weidelt Lecture Notes, Aarhus, 1975 - MTNet


Here U is an N × P matrix containing the P eigenvectors of G Gᵀ belonging to non-zero eigenvalues, and V is an M × P matrix with the P eigenvectors of Gᵀ G associated with non-zero eigenvalues. P is the rank of G. Finally, Λ is a P × P diagonal matrix containing just the P non-zero eigenvalues λ_j. Then the generalized inverse of G,

H = V Λ⁻¹ Uᵀ,

always exists. In the cases mentioned above, H specializes as follows: for P = M it reduces to the least-squares inverse (Gᵀ G)⁻¹ Gᵀ, and for P = N to the minimum-norm inverse Gᵀ (G Gᵀ)⁻¹. For the proof one has to take into account that, because of the orthogonality and normalization of the eigenvectors, one always has

Uᵀ U = Vᵀ V = I (= P-component unit matrix).   (6.30a)

For P < min(M, N), H cannot be expressed in terms of G. The generalized inverse provides a solution vector ⟨x⟩ = H y. Its relation to the true solution x is given by

⟨x⟩ = A x,

where A = H G = V Vᵀ is the (M × M) resolution matrix.
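The construction above can be sketched numerically. In the following hypothetical example (the kernel G, its dimensions, and the rank tolerance are all made up for illustration), numpy's singular value decomposition supplies U, V and the λ_j — the λ_j of the text are the singular values of G — and the resulting H coincides with the Moore-Penrose pseudo-inverse:

```python
import numpy as np

N, M = 5, 3                              # N data, M model parameters (made up)
rng = np.random.default_rng(0)
G = rng.standard_normal((N, M))          # hypothetical N x M kernel

# Economy SVD: G = U_full @ diag(s) @ Vt_full
U_full, s, Vt_full = np.linalg.svd(G, full_matrices=False)

P = int(np.sum(s > 1e-12 * s[0]))        # rank P: number of non-zero eigenvalues
U = U_full[:, :P]                        # N x P eigenvectors of G G^T
V = Vt_full[:P, :].T                     # M x P eigenvectors of G^T G
lam = s[:P]                              # the P non-zero eigenvalues lambda_j

# Generalized inverse H = V Lambda^{-1} U^T (M x N); it always exists.
H = V @ np.diag(1.0 / lam) @ U.T

# The pseudo-inverse is unique, so H agrees with numpy's pinv:
print(np.allclose(H, np.linalg.pinv(G)))  # -> True
```
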


Only for P = M is A the M-component unit matrix, admitting an exact determination of x. (A corresponds approximately to the resolution function A(r₀|r) in the Backus-Gilbert theory. But there are differences: A is symmetrical, A(r₀|r) is not; the norm of A is small if the resolution is poor, whereas the norm of A(r₀|r) is always 1.)
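These properties of the resolution matrix can be checked on a deliberately rank-deficient kernel (a hypothetical example; the matrices are arbitrary): A = H G = V Vᵀ comes out symmetric but, since P < M, it is not the unit matrix:

```python
import numpy as np

M = 4
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 2))
C = rng.standard_normal((2, M))
G = B @ C                                # 6 x 4 kernel of rank P = 2 < M

U_full, s, Vt_full = np.linalg.svd(G, full_matrices=False)
P = int(np.sum(s > 1e-10 * s[0]))        # numerical rank, here 2
V = Vt_full[:P, :].T                     # M x P
H = V @ np.diag(1.0 / s[:P]) @ U_full[:, :P].T

A = H @ G                                # resolution matrix, M x M
# A is symmetric and equals V V^T, but differs from the unit matrix:
print(np.allclose(A, A.T), np.allclose(A, V @ V.T), np.allclose(A, np.eye(M)))
```
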

In generalized matrix inversion there exists the same trade-off as in the Backus-Gilbert method. Consider the covariance matrix of the change of ⟨x⟩ due to random changes of the data y:

cov⟨x⟩ = H cov(y) Hᵀ = σ₀² V Λ⁻² Vᵀ.

In particular, the variance of x_k is

var(x_k) = σ₀² Σ_{j=1}^{P} (v_{kj}/λ_j)²,   k = 1, 2, …, M,

showing that the variance of x_k is largely due to the small eigenvalues λ_j. By discarding small eigenvalues and the corresponding eigenvectors, the accuracy of x_k can be increased at the expense of the resolution, since A will deviate more from an (M × M) unit matrix if instead of the required M eigenvectors a smaller number is used.
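The trade-off can be made visible with a small truncation experiment (the kernel and the data standard error σ₀ are hypothetical): keeping fewer eigenvalues shrinks the variances Σ_j (v_{kj}/λ_j)² but pulls A = V Vᵀ further from the unit matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((8, 4))          # hypothetical 8 x 4 kernel, full rank
U, s, Vt = np.linalg.svd(G, full_matrices=False)
sigma0 = 1.0                             # assumed data standard error

for P in (4, 3, 2):                      # keep only the P largest eigenvalues
    V = Vt[:P, :].T                      # M x P truncated eigenvector basis
    # var(x_k) = sigma0^2 * sum_j (v_kj / lambda_j)^2, k = 1..M
    var = sigma0**2 * np.sum((V / s[:P])**2, axis=1)
    A = V @ V.T                          # resolution matrix for this truncation
    dist = np.linalg.norm(A - np.eye(4)) # deviation from the unit matrix
    print(P, var.sum(), dist)            # total variance falls, deviation grows
```

For an orthonormal V the deviation is exactly sqrt(M − P), so each discarded eigenvector buys variance reduction at a fixed resolution cost.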

The model will in general not reproduce the data. The reproduced data are

⟨y⟩ = G ⟨x⟩ = G H y = B y

with the information density matrix

B = G H = U Uᵀ.   (6.36)

Only in the case N = P is B a unit matrix. In particular, B describes the linear dependence of the data in the overconstrained case. A high diagonal value shows that this datum contains specific information on the model which is not contained in other data. On the other hand, a large off-diagonal value shows that this information is also contained in another datum.
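A minimal sketch of the information density matrix, using an overconstrained hypothetical example (N > P = M, so B cannot be the N × N unit matrix):

```python
import numpy as np

N, M = 6, 3
rng = np.random.default_rng(3)
G = rng.standard_normal((N, M))          # full column rank: P = M < N
U, s, Vt = np.linalg.svd(G, full_matrices=False)
H = Vt.T @ np.diag(1.0 / s) @ U.T        # generalized inverse

B = G @ H                                # information density matrix (6.36)
print(np.allclose(B, U @ U.T))           # B = U U^T          -> True
print(np.allclose(B, np.eye(N)))         # unit matrix only if P = N -> False
print(np.isclose(np.trace(B), M))        # trace B = P        -> True
```

The diagonal entries B_kk sum to P, so they apportion the P independent pieces of information among the N data.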

Valuable insight into the particular inverse problem can be obtained by considering the parameter eigenvectors corresponding to high and
