2-D Niblett-Bostick magnetotelluric inversion

The apparent conductivity σ_a is obtained from the measured impedance Z and is related to the subsurface conductivity σ(x, z) through

σ_a(x, ω) = ωμ_0 |Z(x, ω)|⁻²,        (1)

σ_a(x, T) = [1/(1 − m)] ∫ F(x, x', z', σ, T) σ(x', z') dx' dz',        (2)

where T = 2π/ω is the period and

m = d log σ_a / d log T.        (3)
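As a concrete illustration of definitions (1) and (3), the following NumPy sketch (added here for illustration, not taken from the paper) evaluates the apparent conductivity from sampled impedances and estimates m by finite differences; the synthetic half-space impedance and the period range are assumed test values.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def apparent_conductivity(Z, omega):
    """Equation (1): sigma_a = omega * mu0 * |Z|^(-2)."""
    return omega * MU0 * np.abs(Z) ** -2

def log_slope(sigma_a, T):
    """Equation (3): m = d log(sigma_a) / d log(T), here by finite differences."""
    return np.gradient(np.log(sigma_a), np.log(T))

# Assumed test case: impedance of a homogeneous 100 ohm-m half-space, for which
# sigma_a should come out as 0.01 S/m and m should be close to zero at all periods.
T = np.logspace(-2, 2, 21)                      # periods in seconds
omega = 2.0 * np.pi / T
Z = (1.0 + 1.0j) * np.sqrt(omega * MU0 * 100.0 / 2.0)
sigma_a = apparent_conductivity(Z, omega)
m = log_slope(sigma_a, T)
```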

The kernel F(x, x', z', σ, T) is the Fréchet derivative of σ_a(x, T) with respect to σ(x, z). The integration is defined over the entire lower half-space. The recovery of σ(x, z) from σ_a(x, T) is clearly a nonlinear problem, since F depends on σ. Otherwise, the integral equation could be readily solved using any of the available methods of linear analysis. It is still possible to apply linear methods sequentially, as in traditional linearization, simply by updating a starting model on the right-hand side of the equation. Esparza and Gómez-Treviño (1996), working with the 1-D problem, showed that reasonably good results can be obtained in a single iteration using an adaptive approximation. In 1-D, the approximation is

F(z', σ, T) = (1 − m) F_h(z', σ_a, T).        (4)

F(z', σ, T) on the left-hand side represents the true Fréchet derivative for an arbitrary conductivity distribution, and F_h(z', σ_a, T) on the right stands for the much simpler Fréchet derivative of a homogeneous half-space, whose conductivity is known and equal to the measured apparent conductivity. F_h is simply an attenuated cosine function that gradually vanishes with depth. The factor (1 − m) drops out of the approximation when substituting expression (4) in equation (2). Using the Fréchet derivative F_h(z', σ_a, T) for a homogeneous half-space (Gómez-Treviño, 1987b), the approximation in 1-D can be written as

σ_a(T) = (2/δ_a) ∫_0^∞ e^(−2z'/δ_a) [cos(2z'/δ_a) + sin(2z'/δ_a)] σ(z') dz',        (5)

where δ_a = 503 √(T/σ_a). In the original Niblett-Bostick integral equation, the upper limit of integration is 0.707 δ_a and the kernel is simply (0.707 δ_a)⁻¹.

To solve equation (5) numerically, we divide the half-space into a large number of layers with uniform conductivities. The result is that the integral equation can be written as a matrix equation of the form

σ_a = A σ.        (6)

The vector σ_a contains the data for the different periods. The vector σ represents the unknown conductivity distribution in its discrete form, and the matrix A contains the weights of the conductivity elements for all the available data. The elements of the matrix can be evaluated analytically as

a_ij = e^(−2z_j/δ_ai) cos(2z_j/δ_ai) − e^(−2z_(j+1)/δ_ai) cos(2z_(j+1)/δ_ai),        (7)

where z_j is the top depth of the j-th layer and δ_ai is the skin depth of the i-th measurement.
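As a concrete illustration of equations (5)-(7), the following NumPy sketch (mine, not the paper's) assembles the matrix A analytically for a layered half-space and checks two properties used in the text: each row of A sums to unity, and σ_a = Aσ reproduces a uniform model exactly. The periods, layer depths and test conductivity are assumed values.

```python
import numpy as np

def skin_depth(T, sigma_a):
    """delta_a = 503 * sqrt(T / sigma_a), in metres (T in s, sigma_a in S/m)."""
    return 503.0 * np.sqrt(T / sigma_a)

def build_A(z_tops, delta_a):
    """Equation (7): a_ij = k(z_j) - k(z_(j+1)), with k(z) = exp(-2z/d) * cos(2z/d);
    a_ij is the integral of the kernel of equation (5) over layer j.
    The deepest layer extends to infinity, where k vanishes."""
    d = delta_a[:, None]
    k = np.exp(-2.0 * z_tops[None, :] / d) * np.cos(2.0 * z_tops[None, :] / d)
    k_below = np.hstack([k[:, 1:], np.zeros((delta_a.size, 1))])
    return k - k_below

# Assumed discretization and data: 15 periods, 40 layers, uniform 0.01 S/m model.
T = np.logspace(-2, 2, 15)                                 # periods (s)
sigma_a_data = np.full(T.size, 0.01)                       # apparent conductivities (S/m)
delta_a = skin_depth(T, sigma_a_data)                      # note: A depends on the data themselves
z_tops = np.concatenate(([0.0], np.logspace(1, 5, 39)))    # layer top depths (m)
A = build_A(z_tops, delta_a)

assert np.allclose(A.sum(axis=1), 1.0)                     # rows of A sum to one
sigma = np.full(z_tops.size, 0.01)                         # discrete conductivity model
assert np.allclose(A @ sigma, 0.01)                        # equation (6): sigma_a = A sigma
```

Because the kernel of (5) integrates to one, each apparent conductivity is a weighted average of the layer conductivities, which is the same property noted below for the rows of the 2-D matrix.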

By analogy, making the same type of assumptions and approximations in 2-D, equation (2) can be written as

σ_a(x, T) = ∫ F_h(x, x', z', σ_a, T) σ(x', z') dx' dz'.        (8)

Analytical expressions for F_h are derived in Appendix A for the traditional TE and TM modes, respectively. In turn, they are used in Appendix C to derive the corresponding expressions for series and parallel apparent conductivities.

In 2-D, to construct equation (6) the half-space is divided into a large number of rectangular elements. The integration over the elements can be performed analytically, as described in Appendix B. This is particularly useful for handling the singularities at the points of measurement, and also for the final rectangles on the sides and bottom of the model. It is worth remarking that each of the elements of σ_a is a weighted average of all the unknown conductivity values and that, by virtue of equation (2), the elements of matrix A are dimensionless. Furthermore, the sum of the elements of any row of A is identically unity, which is a very useful property for checking the accuracy of the computations involved. Notice that although equation (6) is a system of linear equations, the model it represents is actually nonlinear, for A depends on the unknown distribution σ through the different values of σ_a.

In terms of the element conductivities σ_j, j = 1, ..., N, the computed apparent conductivities σ̂_ak, k = 1, ..., M, are

σ̂_ak = Σ_(j=1)^N a_kj σ_j,   k = 1, ..., M,        (9)

and their misfit with respect to the measured values σ_ak is

C = (1/2) Σ_(k=1)^M (σ̂_ak − σ_ak)² = (1/2) Σ_(k=1)^M [σ_ak − Σ_(j=1)^N a_kj σ_j]²,        (10)

which expands to

C = −(1/2) Σ_(i=1)^N Σ_(j≠i)^N [−Σ_(k=1)^M a_ki a_kj] σ_i σ_j − Σ_(i=1)^N [−(1/2) Σ_(k=1)^M a_ki² σ_i + Σ_(k=1)^M a_ki σ_ak] σ_i + (1/2) Σ_(k=1)^M σ_ak².        (11)
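The following short numerical check (added for illustration; the random matrix and vectors are not data from the paper) verifies that the expansion (11) reproduces the misfit (10) up to its constant term, which is what allows C to be identified with a Hopfield energy in the next section.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 6, 4                                    # assumed illustrative sizes
A = rng.random((M, N))                         # stands in for the matrix of equation (6)
sigma = rng.random(N)                          # trial element conductivities sigma_j
sigma_a = rng.random(M)                        # "measured" apparent conductivities sigma_ak

# Equations (9)-(10): predicted responses and least-squares misfit.
sigma_a_hat = A @ sigma
C = 0.5 * np.sum((sigma_a_hat - sigma_a) ** 2)

# Equation (11): the same misfit written as a quadratic form plus a constant.
G = A.T @ A
off_diag = G - np.diag(np.diag(G))             # couplings with j != i
quadratic = 0.5 * sigma @ off_diag @ sigma
diag_and_linear = 0.5 * np.diag(G) @ sigma**2 - (A.T @ sigma_a) @ sigma
C_expanded = quadratic + diag_and_linear + 0.5 * np.sum(sigma_a ** 2)

assert np.isclose(C, C_expanded)
```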

HOPFIELD ARTIFICIAL NEURAL NETWORKS

The application of artificial neural networks to the inversion of MT data has been explored in various directions. One way is to use the multi-layer feed-forward neural network architecture (Rummelhart et al., 1986), which uses a set of responses and models presented to the defined input and output neurons, respectively. During a learning phase, the network back-propagates (through its neurons and interconnection weights) errors due to the misfit between the model and the obtained neural model. Learning from one response-model 'pattern' is achieved by updating the inter-neuron connection weights according to a gradient-descent minimization criterion. The process is then applied to the complete response-model data set, thereby achieving a learning epoch.

Once the network is trained, it recovers a model in almost no time when provided with a sounding curve. The distinctive feature of this learning approach is that there is very little physics fed into the algorithms. In fact, as far as the algorithms are concerned, the models and responses used in the training sessions may or may not be related through any physical link; the learning process is simply the same. Hidalgo and Gómez-Treviño (1996) explored this approach for the 1-D problem with reasonably good results. However, extending the method to 2-D would be

In a Hopfield network formulation, the misfit (10) corresponds, up to its constant term, to the network energy

E = −(1/2) Σ_(i=1)^N Σ_(j≠i)^N T_ij σ_i σ_j − Σ_(i=1)^N I_i σ_i,        (12)

with connection weights and bias terms

T_ij = −Σ_(k=1)^M a_ki a_kj,   I_i = −(1/2) Σ_(k=1)^M a_ki² σ_i + Σ_(k=1)^M a_ki σ_ak,        (13)

and with the neurons updated according to

σ_i^(t+1) = 1/2 + (1/2) sgn( Σ_(j≠i) T_ij σ_j^(t) + I_i ).        (14)
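A schematic sketch of the update rule (14) follows. It is an illustration only: the single 0/1 neuron per conductivity element, the random initial state and the fixed number of sweeps are simplifications assumed here, not the scheme of the paper.

```python
import numpy as np

def hopfield_sweeps(A, sigma_a, n_sweeps=50, seed=0):
    """Asynchronous binary updates of equation (14), with T_ij and I_i as in (13).
    Schematic only: one 0/1 neuron per conductivity element."""
    rng = np.random.default_rng(seed)
    G = A.T @ A
    T_w = -G + np.diag(np.diag(G))          # T_ij = -sum_k a_ki a_kj, with T_ii = 0
    b = A.T @ sigma_a                        # data part of the bias I_i
    sigma = rng.integers(0, 2, A.shape[1]).astype(float)   # random initial 0/1 state
    for _ in range(n_sweeps):
        for i in rng.permutation(A.shape[1]):
            I_i = -0.5 * G[i, i] * sigma[i] + b[i]         # equation (13)
            h = T_w[i] @ sigma + I_i                       # T_ii = 0, so j = i is excluded
            sigma[i] = 0.5 + 0.5 * np.sign(h)              # equation (14); a tie at h = 0 leaves 0.5
    return sigma
```

With this coding the iteration is a search over binary models guided by the misfit C; finer conductivity steps would require several neurons per element.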

