Schmucker-Weidelt Lecture Notes, Aarhus, 1975 - MTNet
Again, from a finite and erroneous data set we can extract only averaged estimates with statistical uncertainties. As in the linear case, the averaging kernel $A(r_0|r)$ is built up from a linear combination of the data kernels,

$$A(r_0|r) = \sum_{i=1}^{N} a_i(r_0, m)\, G_i(r, m). \qquad (6.20)$$

Introducing (6.19) into (6.20) we obtain an average which for nonlinear kernels is different from (6.8), since in this nonlinear case $g_i(m) \neq q_i(m)$. In the linear case, two models $m$ and $m'$ which both satisfy the data lead to the same average model $\langle m(r_0) \rangle$. In the nonlinear case the average models are different; the difference, however, is of second order in $(m' - m)$. (Exercise!) The Backus-Gilbert procedure in the nonlinear case therefore requires a model which already nearly fits the data. It can then give an appraisal of the information content of a given data set.

6.2. Generalized matrix inversion

The generalized matrix inversion is an alternative to the Backus-Gilbert method. It is strictly applicable only to linear problems, where the model under consideration consists of a set of discrete unknown parameters; nonlinear problems are generally linearized to bring them within the range of the method. Assume that we want to determine the $M$-component parameter vector $\mathbf{p} = (p_1, \ldots, p_M)^T$, and that we have $N$ functionals (rules) $g_i$, $i = 1, \ldots, N$, each of which assigns to any model $\mathbf{p}$ a number which, when measured, has the average value $y_i$ and variance $\mathrm{var}(y_i)$:

$$g_i(\mathbf{p}) = y_i, \qquad i = 1, \ldots, N.$$
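The construction (6.20) of the averaging kernel can be sketched numerically: the coefficients $a_i$ are chosen so that $A(r_0|r)$ is as peaked as possible at $r_0$ while remaining unimodular, which leads to a small linear system. The depth grid, the exponentially decaying model kernels, and the function name below are illustrative assumptions, not part of the notes.

```python
import numpy as np

# Hypothetical discretization: a depth grid and N exponentially
# decaying data kernels G_i(r), standing in for the kernels in (6.20).
r = np.linspace(0.0, 1.0, 201)               # depth coordinate
dr = r[1] - r[0]
N = 8
scales = np.linspace(0.1, 1.0, N)
G = np.exp(-r[None, :] / scales[:, None])    # row i is G_i(r)

def backus_gilbert_kernel(r0):
    """Coefficients a_i minimizing the spread of A(r0|r) = sum_i a_i G_i(r)
    subject to the unimodularity constraint  integral A(r0|r) dr = 1."""
    # Spread matrix S_ij = integral (r - r0)^2 G_i(r) G_j(r) dr
    S = (G * (r - r0) ** 2) @ G.T * dr
    u = G.sum(axis=1) * dr                   # u_i = integral G_i(r) dr
    a = np.linalg.solve(S, u)
    a /= u @ a                               # enforce integral A dr = 1
    return a

a = backus_gilbert_kernel(r0=0.3)
A = a @ G                                    # averaging kernel A(r0|r)
print("unimodularity:", A.sum() * dr)        # ~ 1 by construction
```

Evaluating the kernel for several values of $r_0$ shows how the resolving width grows with depth, which is the appraisal of information content mentioned above.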
Suppose that an approximation $\mathbf{p}^0$ to $\mathbf{p}$ is known. Then, neglecting terms of order $O(|\mathbf{p} - \mathbf{p}^0|^2)$, we have

$$y_i = g_i(\mathbf{p}^0) + \sum_{k=1}^{M} \frac{\partial g_i}{\partial p_k}\,(p_k - p_k^0). \qquad (6.22)$$

Eq. (6.22) constitutes a system of $N$ equations for the $M$ parameter changes $p_k - p_k^0$. The generalized matrix inversion provides a solution to this system, irrespective of whether $N = M$, $N < M$, or $N > M$. If the rank of the system matrix $\partial g_i/\partial p_k$ is equal to $\min(M, N)$, the generalized matrix inversion provides in the case

$M = N$ (regular system): the ordinary solution,
$M < N$ (overconstrained system): the least-squares solution,
$M > N$ (underdetermined system): the smallest correction vector $\mathbf{p} - \mathbf{p}^0$.

The generalized inverse also exists when the rank of $\partial g_i/\partial p_k$ is smaller than $\min(M, N)$. After solving the system, the correction is applied to $\mathbf{p}^0$, and this vector serves in the next step as a new approximation to $\mathbf{p}$, thus starting an iterative scheme.

It is convenient to give all data the same variance $\sigma_0^2$, thus defining as new data and matrix elements

$$y_i' = \frac{\sigma_0}{\sigma_i}\, y_i, \qquad G_{ik} = \frac{\sigma_0}{\sigma_i}\, \frac{\partial g_i}{\partial p_k},$$

thus weighting the residuals in a least-squares solution according to their accuracy, which makes sense. Let

$$\mathbf{x} = \mathbf{p} - \mathbf{p}^0$$

be the parameter correction vector. Then (6.22) reads

$$\mathbf{G}\,\mathbf{x} = \Delta\mathbf{y},$$

where $\Delta y_i = (\sigma_0/\sigma_i)\,(y_i - g_i(\mathbf{p}^0))$ is the vector of weighted residuals ($G_{ik}$ corresponds to the data kernels $G_i(r)$ in the Backus-Gilbert theory). In the generalized matrix inversion, $\mathbf{G}$ is first decomposed into data eigenvectors $\mathbf{u}_j$ and parameter eigenvectors $\mathbf{v}_j$:

$$\mathbf{G} = \mathbf{U}\,\mathbf{\Lambda}\,\mathbf{V}^T. \qquad (6.26)$$
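One step of the iterative scheme above can be sketched with the decomposition (6.26): build the generalized inverse from the singular value decomposition and apply it to the weighted residual vector. The function name and the small example matrices are illustrative assumptions; the mechanism (least squares for the overconstrained case, smallest correction for the underdetermined case) is exactly the one described in the text.

```python
import numpy as np

def generalized_inverse_step(G, dy, rank=None):
    """One correction step x = p - p0: solve the (row-scaled) linearized
    system G x = dy with the generalized inverse built from the
    decomposition G = U Lambda V^T of (6.26). Handles N = M (ordinary
    solution), N > M (least squares) and N < M (minimum-norm correction)."""
    U, lam, Vt = np.linalg.svd(G, full_matrices=False)
    if rank is None:
        # numerical rank: keep only eigenvalues well above rounding level
        rank = int(np.sum(lam > lam[0] * 1e-12))
    # x = V Lambda^{-1} U^T dy, restricted to the nonzero eigenvalues
    return Vt[:rank].T @ ((U[:, :rank].T @ dy) / lam[:rank])

# Overconstrained example (N = 3 data, M = 2 parameters): least squares
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
dy = np.array([1.0, 1.0, 2.0])
x = generalized_inverse_step(G, dy)          # -> [1, 1]

# Underdetermined example (N = 1, M = 2): smallest correction vector
x2 = generalized_inverse_step(np.array([[1.0, 1.0]]), np.array([2.0]))
print(x, x2)                                 # x2 = [1, 1], minimum norm
```

Truncating `rank` below the numerical rank is also how small eigenvalues are suppressed in practice, which anticipates the discussion of the rank-deficient case.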