Schmucker-Weidelt Lecture Notes, Aarhus, 1975 - MTNet
Here U is an N x P matrix containing the P eigenvectors u_j belonging to the non-zero eigenvalues λ_j of the coupled eigenvalue problem G v_j = λ_j u_j, Gᵀ u_j = λ_j v_j, and V is an M x P matrix with the corresponding P eigenvectors v_j. P is the rank of G. Finally, Λ is a P x P diagonal matrix containing just the P non-zero eigenvalues λ_j. Then the generalized inverse of G,

    H = V Λ⁻¹ Uᵀ,

always exists. In the cases mentioned above, H specializes as follows: for P = M, H = (Gᵀ G)⁻¹ Gᵀ; for P = N, H = Gᵀ (G Gᵀ)⁻¹. For the proof one has to take into account that, because of the orthogonality and normalization of the eigenvectors, one always has

    Uᵀ U = Vᵀ V = I  (= P-component unit matrix),        (6.30a)

and in addition V Vᵀ = I for P = M, and U Uᵀ = I for P = N. For P < min(M, N), H cannot be expressed in terms of G. The generalized inverse provides a solution vector <x> = H y. Its relation to the true solution x is given by <x> = A x, where A = H G = V Vᵀ is the (M x M) resolution matrix.
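The construction of H from the eigenvectors and non-zero eigenvalues can be sketched numerically. The matrix G and data y below are hypothetical, not taken from the notes; the singular value decomposition supplies U, λ_j, and V directly:

```python
import numpy as np

# Hypothetical overconstrained example (N data, M model parameters):
rng = np.random.default_rng(0)
N, M = 5, 3
G = rng.standard_normal((N, M))
y = rng.standard_normal(N)

# numpy returns G = U diag(s) Vt; keep only the P non-zero eigenvalues
U, s, Vt = np.linalg.svd(G, full_matrices=False)
P = int(np.sum(s > 1e-12 * s[0]))       # numerical rank of G
U, s, Vt = U[:, :P], s[:P], Vt[:P, :]

H = Vt.T @ np.diag(1.0 / s) @ U.T       # generalized inverse H = V Λ⁻¹ Uᵀ
x_est = H @ y                           # solution vector <x> = H y

# For P = M the generalized inverse reduces to the least-squares form
H_ls = np.linalg.inv(G.T @ G) @ G.T
assert np.allclose(H, H_ls)
```

The final assertion checks the specialization for P = M; for P < min(M, N) only the eigenvector construction remains available.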
Only for P = M is A the M-component unit matrix, admitting an exact determination of x. (A corresponds approximately to the resolution function A(r₀|r) in the Backus-Gilbert theory. But there are differences: A is symmetrical, A(r₀|r) is not; the norm of A is small if the resolution is poor, whereas the norm of A(r₀|r) is always 1.) The same trade-off exists in generalized matrix inversion as in the Backus-Gilbert method. Consider the covariance matrix of the change of <x> due to random changes of the data. In particular, the variance of x_k is

    var(x_k) = σ₀² Σ_{j=1}^{P} (v_{kj} / λ_j)²,    k = 1, 2, ..., M,

showing that the variance of x_k is largely due to the small eigenvalues λ_j. By discarding small eigenvalues and the corresponding eigenvectors, the accuracy of x_k can be increased at the expense of the resolution, since A will deviate more from an (M x M) unit matrix if, instead of the required M eigenvectors, a smaller number is used.

The model will in general not reproduce the data. The reproduced data are

    <y> = G <x> = G H y = B y

with the information density matrix

    B = G H = U Uᵀ.        (6.36)

Only in the case N = P is B a unit matrix. In particular, B describes the linear dependence of the data in the overconstrained case. A high diagonal value shows that this datum contains specific information on the model which is not contained in other data. On the other hand, a large off-diagonal value shows that this information is also contained in another datum.

Valuable insight into the particular inverse problem can be obtained by considering the parameter eigenvectors corresponding to high and