
This says that the likelihood function of the current observation y_t is distributed as a multivariate normal distribution with mean F_t^T θ_t and variance-covariance V_t. We have already outlined these terms in the list above.

Finally we have the posterior of θ_t:

θ_t | D_t ∼ N(m_t, C_t) (13.10)

This says that the posterior view of the current state θ_t, given our current knowledge at time t, is distributed as a multivariate normal distribution with mean m_t and variance-covariance C_t.

The Kalman Filter is what links all of these terms together for t = 1, . . . , n. We will not derive where these values actually come from; instead they will simply be stated. Thankfully we can use library implementations in Python to carry out the "heavy lifting" calculations for us:

a_t = G_t m_{t−1} (13.11)

R_t = G_t C_{t−1} G_t^T + W_t (13.12)

e_t = y_t − f_t (13.13)

m_t = a_t + A_t e_t (13.14)

f_t = F_t^T a_t (13.15)

Q_t = F_t^T R_t F_t + V_t (13.16)

A_t = R_t F_t Q_t^{−1} (13.17)

C_t = R_t − A_t Q_t A_t^T (13.18)

Clearly that is a lot of notation. As I said above, we need not worry about the verboseness of the Kalman Filter recursions, since we can simply use libraries in Python to carry out the algorithm for us.

How does it all fit together? Well, f_t is the predicted value of the observation at time t, where we make this prediction at time t − 1. Since e_t = y_t − f_t, we can easily see that e_t is the error associated with the forecast, that is, the difference between f_t and y_t.

Importantly, the posterior mean is a weighting of the prior mean and the forecast error, since m_t = a_t + A_t e_t = G_t m_{t−1} + A_t e_t, where G_t and A_t are our weighting matrices.
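To make the recursions concrete, here is an illustrative NumPy sketch of a single filtering step, implementing equations (13.11)–(13.18) directly. This is not the library code the text refers to — the function name and argument layout are my own — but each line maps onto one of the numbered equations, using the chapter's notation.

```python
import numpy as np


def kalman_filter_step(y_t, m_prev, C_prev, G_t, F_t, W_t, V_t):
    """One step of the Kalman Filter recursions (13.11)-(13.18).

    Arguments follow the chapter's notation: G_t is the state transition
    matrix, F_t the observation matrix, W_t the state noise covariance and
    V_t the observation noise covariance. Illustrative sketch only.
    """
    a_t = G_t @ m_prev                    # (13.11) prior state mean
    R_t = G_t @ C_prev @ G_t.T + W_t      # (13.12) prior state covariance
    f_t = F_t.T @ a_t                     # (13.15) one-step forecast of y_t
    Q_t = F_t.T @ R_t @ F_t + V_t         # (13.16) forecast covariance
    e_t = y_t - f_t                       # (13.13) forecast error
    A_t = R_t @ F_t @ np.linalg.inv(Q_t)  # (13.17) adaptive gain matrix
    m_t = a_t + A_t @ e_t                 # (13.14) posterior state mean
    C_t = R_t - A_t @ Q_t @ A_t.T         # (13.18) posterior state covariance
    return m_t, C_t, f_t, e_t
```

Running this function over t = 1, . . . , n, feeding each (m_t, C_t) back in as (m_prev, C_prev), reproduces the full filtering pass that the Python libraries mentioned in the text perform internally.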

Now that we have an algorithmic procedure for updating our views on the observations and

states we can use it to make predictions as well as smooth the data.

13.2.2 Prediction

The Bayesian approach to the Kalman Filter leads naturally to a mechanism for prediction.

Since we have our posterior estimate for the state θ_t, we can predict the next day's values by considering the mean value of the observation.

Let us take the expected value of the observation tomorrow given our knowledge of the data

today:
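A sketch of the step being set up here, using (13.11) and (13.15) shifted to time t + 1 (and assuming the standard DLM observation equation y_t = F_t^T θ_t + ν_t with zero-mean noise ν_t):

```latex
\mathbb{E}(y_{t+1} \mid D_t)
  = F_{t+1}^{T}\, \mathbb{E}(\theta_{t+1} \mid D_t)
  = F_{t+1}^{T} a_{t+1}
  = F_{t+1}^{T} G_{t+1} m_{t}
```

That is, tomorrow's expected observation is obtained by propagating today's posterior mean m_t through the state transition G_{t+1} and then through the observation matrix F_{t+1}.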
