The GNSS integer ambiguities: estimation and validation


Table 3.2: Two-dimensional example. Mean and maximum difference between success rate,
Ps, based on simulations and the approximations. The success rate for which the maximum
difference is obtained is given in the last row.

                    LB bootstr.   LB region   UB ADOP   UB region   ADOP
  mean difference   0.005         0.018       0.001     0.018       0.004
  max. difference   0.010         0.105       0.003     0.065       0.009
  Ps                0.805         0.388       0.833     0.559       0.805

Table 3.3: Approximated success rates using simulations (sim.), the lower bounds based on
bootstrapping (LB bootstr.) and bounding the integration region (LB region), the upper
bounds based on ADOP (UB ADOP) and bounding the integration region (UB region), and
the approximation based on ADOP (ADOP).

  example   sim.    LB bootstr.   LB region   UB ADOP   UB region   ADOP
  2-D       1.000   0.999         1.000       1.000     1.000       0.999
  06 01     0.818   0.749         0.698       0.848     0.942       0.765
  06 02     0.442   0.410         0.118       0.475     0.626       0.417
  10 01     0.989   0.976         0.988       0.999     0.992       0.978
  10 03     0.476   0.442         0.126       0.681     0.661       0.525

though it is more useful to know how well the approximations work in practice. Therefore,
simulations were carried out for several geometry-based models, see appendix B. The
resulting lower and upper bounds are shown in table 3.3. The first row shows the results
for the two-dimensional vc-matrix with f = 1.

The results show that Kondo's lower bound works very well for a high success rate, but in
general the bootstrapped lower bound is much better. It can be concluded that Kondo's
lower bound is useful only in a few cases. First, to obtain a strict lower bound the
precision should be high, so that the success rate is high. Even then, it depends on the
minimum required success rate whether it is really necessary to use the approximation:
if the bootstrapped success rate is somewhat lower than this minimum required success
rate, Kondo's approximation can be used to check whether the true success rate is still
larger. The minimum required success rate could be chosen such that the fixed ambiguity
estimator can be considered deterministic. In this case, the discrimination tests as used
in practice can be applied, see section 3.5.

An advantage of the bootstrapped success rate is that it is very easy to compute,
since the conditional variances are already available when using the LAMBDA method.
The computation of Kondo's lower bound is more complex, since for high-dimensional
problems the number of facets that bound the pull-in region can be very large.
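To illustrate why the bootstrapped success rate is so cheap to evaluate: it follows in closed form from the conditional standard deviations σ_{i|I} of the (decorrelated) ambiguities, via P_s = ∏ (2Φ(1/(2σ_{i|I})) − 1), where Φ is the standard normal CDF. A minimal sketch; the conditional standard deviations used below are hypothetical example values, not taken from the tables above:

```python
from math import erf, sqrt, prod

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bootstrapped_success_rate(cond_std: list[float]) -> float:
    """Success rate of integer bootstrapping from the conditional
    standard deviations sigma_{i|I}, which the LAMBDA decomposition
    already provides: P_s = prod(2*Phi(1/(2*sigma_{i|I})) - 1)."""
    return prod(2.0 * phi(1.0 / (2.0 * s)) - 1.0 for s in cond_std)

# Hypothetical conditional standard deviations (cycles), for illustration:
print(bootstrapped_success_rate([0.10, 0.12, 0.15]))
```

Since the product only shrinks as terms are added, the success rate decreases monotonically with the ambiguity dimension for a given precision level.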

It is difficult to say which upper bound is best. For the examples with only four visible
satellites the ADOP-based upper bound is better than the one obtained by bounding
the integration region, but in the examples with more satellites the latter is somewhat
better. All bounds are best in the case of high precisions, i.e. high success rates. All
in all, one can have a little more confidence in the ADOP-based bound, since its overall
performance, based on all examples, is slightly better. An advantage of the ADOP-based
upper bound is that it is easy to compute, whereas using the upper bound based
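The ease of computing the ADOP-based approximation can be sketched as follows: with ADOP = |Q_â|^{1/(2n)} for the n-dimensional ambiguity vc-matrix Q_â, the success-rate approximation is (2Φ(1/(2·ADOP)) − 1)^n. The vc-matrix below is a hypothetical 2-D example, not one of the matrices from the tables above:

```python
import numpy as np
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def adop(Q: np.ndarray) -> float:
    """ADOP = |Q|^(1/(2n)) for the n x n ambiguity vc-matrix Q."""
    n = Q.shape[0]
    return np.linalg.det(Q) ** (1.0 / (2.0 * n))

def adop_success_rate(Q: np.ndarray) -> float:
    """ADOP-based success-rate approximation (2*Phi(1/(2*ADOP)) - 1)^n."""
    n = Q.shape[0]
    return (2.0 * phi(1.0 / (2.0 * adop(Q))) - 1.0) ** n

# Hypothetical 2x2 ambiguity vc-matrix (cycles^2), for illustration only:
Q = np.array([[0.02, 0.01],
              [0.01, 0.03]])
print(adop_success_rate(Q))
```

Only a determinant is needed, which explains why this approximation scales so easily compared with enumerating the facets of the pull-in region.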

