
JOURNAL OF SOFTWARE, VOL. 6, NO. 5, MAY 2011 939

$$
\begin{cases}
e_{k+1} = e_k + w_k \\
\Delta H_{k+1} = \Delta H_k \\
\Delta V_{k+1} = \Delta V_k \\
x_k^* = x_k + e_k \\
y_k = h(x_k - \Delta H_k) - \Delta V_k + v_k
\end{cases}
\qquad (5)
$$

where the equations $\Delta H_{k+1} = \Delta H_k$ and $\Delta V_{k+1} = \Delta V_k$ reflect the assumption above, namely that the error components are constant in the area concerned.

The compact form of (5) is

$$
\begin{cases}
e^*_{k+1} = e^*_k + w'_k \\
x^*_k = x_k + F \cdot e^*_k \\
y_k = h(x_k - \Delta H_k) - \Delta V_k + v_k
\end{cases}
\qquad (6)
$$

where
$$
e^*_k = \begin{bmatrix} e_k \\ \Delta H_k \\ \Delta V_k \end{bmatrix}, \quad
F = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{bmatrix}, \quad
w'_k = \begin{bmatrix} w_k \\ 0 \\ 0 \end{bmatrix}.
$$
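As a sanity check, the augmented model (6) can be simulated directly. Everything concrete below is an assumed stand-in for illustration only: the 2-D position error, the toy terrain map `h`, the bias values, and the noise levels are not taken from the paper.

```python
import numpy as np

# Sketch simulating the augmented error model (6) for a few steps.
rng = np.random.default_rng(0)
F = np.array([[1.0, 0, 0, 0, 0],
              [0, 1.0, 0, 0, 0]])           # F @ e* picks out the position error e_k
h = lambda p: np.sin(p[0]) + np.cos(p[1])   # hypothetical stand-in terrain map

e_star = np.array([0.0, 0.0, 0.2, -0.1, 0.5])   # [e; dH; dV], biases assumed constant
x_true = np.array([1.0, 2.0])
for k in range(5):
    x_ind = x_true + F @ e_star             # indicated position x* = x + e
    dH, dV = e_star[2:4], e_star[4]
    y = h(x_true - dH) - dV + rng.normal(0.0, 0.05)   # measurement of (6)
    w = rng.normal(0.0, 0.1, 2)             # process noise on the position error
    e_star = e_star + np.concatenate([w, np.zeros(3)])  # w' = [w; 0; 0]
```

Note that only the first two components of `e_star` evolve; the map-error biases stay constant, exactly as the dynamics $\Delta H_{k+1} = \Delta H_k$ and $\Delta V_{k+1} = \Delta V_k$ demand.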

III. RECURSIVE BAYESIAN ESTIMATION

According to Bayesian theory, a Bayesian estimation problem is defined by the joint density of the parameters and the observations, $p(x, y) = p(y \mid x)\, p(x)$. The estimator under the minimum mean square error criterion is the posterior mean $\hat{x}_{MS} = \int_{R^n} x\, p(x \mid y)\, dx$ [6]. So if there exists some relationship between the observation and the parameter, the parameter can be estimated from the observation. From the information-theoretic point of view, the observation carries information about the parameter whenever there is a stochastic relationship between the two, so the observation can be used to estimate the parameter.
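A minimal numerical sketch of the MMSE idea, using a hypothetical discrete parameter with a uniform prior and a Gaussian likelihood (both assumed here for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical example: x takes values 0..4 with a uniform prior, and the
# observation y is x plus Gaussian noise (sigma assumed to be 1).
x_vals = np.arange(5)
prior = np.full(5, 0.2)                      # p(x)
y = 2.3                                      # one observed value
lik = np.exp(-0.5 * (y - x_vals) ** 2)       # p(y | x), unnormalized
post = lik * prior                           # p(x | y) ∝ p(y | x) p(x)
post /= post.sum()
x_hat_ms = np.sum(x_vals * post)             # posterior mean = MMSE estimate
```

The posterior mean lands near the observed value because the prior is flat; a sharper prior would pull the estimate toward its own mass.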

Let $Y_k$ be the augmented measurement vector consisting of all the measurements up to time step k. From the Bayesian formula [6] and the new system model (6), we have the posterior probability density function update:

$$
\begin{aligned}
p(e_k \mid Y_k) &= \frac{p(y_k \mid e_k, Y_{k-1}) \cdot p(e_k \mid Y_{k-1})}{p(y_k \mid Y_{k-1})} \\
&= \alpha_k^{-1} \cdot p_v\big(y_k - h(x^*_k - e_k - \Delta H_k) + \Delta V_k\big) \cdot p(e_k \mid Y_{k-1})
\end{aligned}
\qquad (7)
$$

where
$$
\alpha_k = \int_{R^N} p_v\big(y_k - h(x^*_k - e_k - \Delta H_k) + \Delta V_k\big) \cdot p(e_k \mid Y_{k-1})\, de_k .
$$

The prior probability density function update is

$$
\begin{aligned}
p(e_{k+1} \mid Y_k) &= \int_{R^N} p(e_{k+1} \mid e_k, Y_k) \cdot p(e_k \mid Y_k)\, de_k \\
&= \int_{R^N} p_{w'}(e_{k+1} - e_k) \cdot p(e_k \mid Y_k)\, de_k .
\end{aligned}
\qquad (8)
$$

Given the initial prior density $p(e_0 \mid Y_{-1}) = p(e_0)$, we can recursively generate the posterior probability density through equations (7) and (8). The state estimate is $\hat{e}_k = E[e_k \mid Y_k]$, with covariance matrix $\hat{P}_k = E[(e_k - \hat{e}_k)(e_k - \hat{e}_k)^T \mid Y_k]$.

© 2011 ACADEMY PUBLISHER

The recursive Bayesian equations above are the theoretical solution for model (6), but they are intractable because of the complexity of the posterior and prior probability density functions in the non-linear model. Numerical methods are usually required to compute the result, such as the point-mass filter (PMF) or the particle filter (PF). This paper uses the PF to solve the model.
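Before turning to the PF, the recursion (7)-(8) can be made concrete with a tiny point-mass filter: the density lives on a grid, (7) becomes a pointwise multiply-and-renormalize, and (8) becomes a discrete convolution with the process-noise density. The scalar model below ($h(e) = e^2$, Gaussian noises, grid limits) is an assumed stand-in, not the paper's model.

```python
import numpy as np

# Sketch of a 1-D point-mass filter for a model of the form (6).
grid = np.linspace(-5, 5, 201)               # discretized error space
de = grid[1] - grid[0]
p = np.exp(-0.5 * grid**2)                   # prior p(e_0), N(0, 1)
p /= p.sum() * de

def gauss(z, sigma):
    return np.exp(-0.5 * (z / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def pmf_step(p, y, sigma_v=0.5, sigma_w=0.3):
    # Measurement update (7): multiply by the likelihood p_v(y - h(e)), renormalize.
    post = p * gauss(y - grid**2, sigma_v)
    post /= post.sum() * de
    # Time update (8): convolve the posterior with the process-noise density p_w.
    K = gauss(grid[:, None] - grid[None, :], sigma_w)   # transition kernel
    prior_next = (K * post[None, :]).sum(axis=1) * de
    return post, prior_next / (prior_next.sum() * de)

post, p = pmf_step(p, y=1.2)
e_hat = (grid * post).sum() * de             # posterior-mean estimate, as in (10)
```

With this particular $h(e) = e^2$ the posterior is bimodal (both $\pm\sqrt{y}$ explain the measurement), which is exactly the kind of density that grid and particle methods handle and a Kalman-style filter cannot.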

IV. PARTICLE FILTER

The fundamental idea of the particle filter is to represent the probability density function (PDF) by a set of weighted particles, and to replace the recursion of the posterior density function with a recursion of the particle set. Once the complex probability density function is represented by the particle set, the integrations in $\alpha_k$ and (8) can be calculated easily according to Monte Carlo integration theory.

Let $\{e^i_k, w^i_k\}_{i=1}^{M}$ be the particle set for the posterior PDF $p(e_k \mid Y_k)$, where $\sum_{i=1}^{M} w^i_k = 1$; then

$$
p(e_k \mid Y_k) \approx \sum_{i=1}^{M} w^i_k \cdot \delta(e_k - e^i_k)
\qquad (9)
$$

where $\delta(\cdot)$ is the Dirac delta function. The estimator and its covariance are

$$
\begin{aligned}
\hat{e}_k = E[e_k \mid Y_k] &= \int_{R^N} e_k \cdot \sum_{i=1}^{M} w^i_k\, \delta(e_k - e^i_k)\, de_k \\
&= \sum_{i=1}^{M} w^i_k \cdot e^i_k
\end{aligned}
\qquad (10)
$$

$$
\begin{aligned}
\hat{P}_k = E[(e_k - \hat{e}_k)(e_k - \hat{e}_k)^T \mid Y_k]
&= \int_{R^N} (e_k - \hat{e}_k)(e_k - \hat{e}_k)^T \cdot \sum_{i=1}^{M} w^i_k\, \delta(e_k - e^i_k)\, de_k \\
&= \sum_{i=1}^{M} w^i_k \cdot (e^i_k - \hat{e}_k)(e^i_k - \hat{e}_k)^T
\end{aligned}
\qquad (11)
$$

The recursion of the particle set is briefly explained below. Let $\{e^i_{k-1}, w^i_{k-1}\}_{i=1}^{M}$ be the particle set at time k-1, which represents the posterior PDF $p(e_{k-1} \mid Y_{k-1})$. At time k, we first draw a sample $e^i_k$ from an easy-to-sample proposal distribution $q(e_k \mid e^i_{k-1}, y_k)$ and then update its weight using

$$
w^i_k \propto w^i_{k-1} \cdot \frac{p(y_k \mid e^i_k) \cdot p(e^i_k \mid e^i_{k-1})}{q(e^i_k \mid e^i_{k-1}, y_k)}
\qquad (12)
$$
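One sequential-importance-sampling step implementing (12) can be sketched as follows. Here the transition prior is used as the proposal $q$ (the common "bootstrap" choice), so (12) reduces to $w^i_k \propto w^i_{k-1} \cdot p(y_k \mid e^i_k)$; the scalar model (random-walk error, $h(e) = e^2$) and the noise levels are assumed for illustration, not taken from the paper.

```python
import numpy as np

# One SIS step for a scalar error model: e_k = e_{k-1} + w_k, y_k = h(e_k) + v_k.
rng = np.random.default_rng(1)
M, sigma_w, sigma_v = 500, 0.3, 0.5
e = rng.normal(0.0, 1.0, M)                  # particles for p(e_{k-1} | Y_{k-1})
w = np.full(M, 1.0 / M)                      # their normalized weights

def sis_step(e, w, y):
    e_new = e + rng.normal(0.0, sigma_w, M)  # draw e_k^i from q = p(e_k | e_{k-1}^i)
    lik = np.exp(-0.5 * ((y - e_new**2) / sigma_v) ** 2)   # p(y_k | e_k^i)
    w_new = w * lik                          # (12) with the prior as proposal
    return e_new, w_new / w_new.sum()        # renormalize so weights sum to 1

e, w = sis_step(e, w, y=1.2)
e_hat = np.sum(w * e)                        # estimate as in (10)
```

In practice a resampling step is added when the weights degenerate, but the weight update above is the core of the recursion.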
