The GNSS integer ambiguities: estimation and validation

validation criteria. The fundamental difference is that in the Bayesian approach not only the vector of observations y is considered random, but the vector of unknown parameters x as well. The concept of Bayesian estimation is described in e.g. (Betti et al. 1993; Gundlich and Koch 2002; Gundlich 2002; Teunissen 2001b).

According to Bayes' theorem the posterior density p(x|y) is proportional to the likelihood function p(y|x) and the prior density p(x):

\[
p(x|y) = \frac{p(y|x)\,p(x)}{p(y)} \tag{3.92}
\]
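Bayes' theorem can be sketched numerically for a discrete parameter; the candidate values and probabilities below are illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical discrete parameter x with three candidate values (assumed numbers).
prior = np.array([0.5, 0.3, 0.2])          # p(x)
likelihood = np.array([0.1, 0.6, 0.3])     # p(y|x), evaluated at the observed y
evidence = np.sum(likelihood * prior)      # p(y): normalizing constant of (3.92)
posterior = likelihood * prior / evidence  # p(x|y) = p(y|x) p(x) / p(y)
```

Because p(y) only normalizes, the posterior is fully determined by the product of likelihood and prior, which is why (3.92) is often written as a proportionality.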

The Bayes estimate of the random parameter vector is defined as the conditional mean

\[
\hat{x}_{\mathrm{Bayes}} = E\{x|y\} = \int x\, p(x|y)\, dx \tag{3.93}
\]

This estimate minimizes the discrepancy between x and \(\hat{x}\) on average, where \(\hat{x}\) is a function of y. So, if \(L(x, \hat{x})\) is the measure of loss (or discrepancy), this amounts to solving the minimization problem

\[
\min_{\hat{x}} E\{L(x,\hat{x})\,|\,y\} = \min_{\hat{x}} \int L(x,\hat{x})\, p(x|y)\, dx \tag{3.94}
\]

If \(L(x,\hat{x}) = \|x-\hat{x}\|^2_Q\) it follows that the Bayes estimate \(\hat{x}_{\mathrm{Bayes}}\) is indeed the solution to this minimization problem.
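As a numerical check on this claim, one can evaluate the expected quadratic loss under a small discrete stand-in posterior and verify that it is smallest at the conditional mean; the support points and probabilities below are assumptions for illustration only.

```python
import numpy as np

# Assumed discrete stand-in for p(x|y) (scalar x, Q = 1 for simplicity).
support = np.array([-1.0, 0.0, 1.0, 2.0])   # candidate values of x
posterior = np.array([0.1, 0.4, 0.3, 0.2])  # p(x|y), sums to one
post_mean = np.sum(support * posterior)     # E{x|y}, the Bayes estimate (3.93)

def expected_loss(xhat):
    # E{ L(x, xhat) | y } with L(x, xhat) = (x - xhat)^2, cf. (3.94)
    return np.sum((support - xhat) ** 2 * posterior)

# The expected loss at the posterior mean is no larger than at any grid point.
grid = np.linspace(-2.0, 3.0, 501)
assert all(expected_loss(post_mean) <= expected_loss(g) + 1e-12 for g in grid)
```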

In order to apply the Bayesian approach to ambiguity resolution, with the parameter vector \(x = [b^T\; a^T]^T\), a and b are assumed to be independent with priors

\[
\begin{cases}
p(a) \propto \sum_{z\in\mathbb{Z}^n} \delta(a-z) & \text{(pulse train)}\\[2pt]
p(b) \propto \text{constant}
\end{cases} \tag{3.95}
\]

where \(\delta\) is the Dirac function. From the orthogonal decomposition of equation (3.14) it follows that the likelihood function is

\[
p(y|a,b) \propto \exp\left\{-\tfrac{1}{2}\left(\|\hat{a}-a\|^2_{Q_{\hat{a}}} + \|\hat{b}(a)-b\|^2_{Q_{\hat{b}|\hat{a}}}\right)\right\} \tag{3.96}
\]

and the posterior density therefore follows as

\[
p(a,b|y) \propto \exp\left\{-\tfrac{1}{2}\left(\|\hat{a}-a\|^2_{Q_{\hat{a}}} + \|\hat{b}(a)-b\|^2_{Q_{\hat{b}|\hat{a}}}\right)\right\} \sum_{z\in\mathbb{Z}^n} \delta(a-z) \tag{3.97}
\]

The marginal posterior densities p(a|y) and p(b|y) follow from integrating this joint posterior density over the domains b ∈ R^p and a ∈ R^n, respectively. Note that the integration domain of a is not chosen as Z^n, since the discrete nature of a is considered to be captured by assuming the prior to be a pulse train.

The marginal posterior PDFs are then obtained as

\[
\begin{cases}
p(a|y) = \sum_{z\in\mathbb{Z}^n} w_z(\hat{a})\, \delta(a-z)\\[2pt]
p(b|y) = \sum_{z\in\mathbb{Z}^n} p_{b|a}(b\,|\,a=z,\, y)\, w_z(\hat{a})
\end{cases} \tag{3.98}
\]
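The weights in (3.98) can be sketched in code: with the pulse-train prior, each integer candidate z receives weight proportional to \(\exp\{-\tfrac{1}{2}\|\hat{a}-z\|^2_{Q_{\hat{a}}}\}\), normalized over a (truncated) candidate set. The float solution, vc-matrix, and search box below are illustrative assumptions, not values from the text.

```python
import numpy as np
from itertools import product

a_hat = np.array([1.3, -0.2])               # assumed float ambiguity solution
Q = np.array([[0.09, 0.02],
              [0.02, 0.06]])                # assumed vc-matrix Q_a_hat
Qinv = np.linalg.inv(Q)

# Truncate the infinite sum over Z^n to a small box of integers around a_hat.
ranges = [range(int(np.floor(c)) - 2, int(np.ceil(c)) + 3) for c in a_hat]
cands = np.array(list(product(*ranges)))

# Quadratic form ||a_hat - z||^2_{Q_a_hat} for every candidate z at once.
d = cands - a_hat
w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Qinv, d))
w /= w.sum()                                # normalized weights w_z(a_hat)

a_bayes = w @ cands                         # posterior mean of a under p(a|y)
z_best = cands[np.argmax(w)]                # candidate carrying the largest weight
```

Note that the candidate with the largest weight is the minimizer of the quadratic form, i.e. the integer least-squares solution over the candidate set, whereas the Bayesian estimate a_bayes is a weighted average of integer vectors and therefore generally not integer itself.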

