advanced-algorithmic-trading

E[y_{t+1} | D_t] = E[F_{t+1}^T θ_{t+1} + v_{t+1} | D_t]    (13.19)

= F_{t+1}^T E[θ_{t+1} | D_t]    (13.20)

= F_{t+1}^T a_{t+1}    (13.21)

= f_{t+1}    (13.22)

Where does this come from? Let us follow the analysis through.

Since the likelihood for today's observation y_t, given today's state θ_t, is normally distributed with mean F_t^T θ_t and variance-covariance V_t (see above), the expectation of tomorrow's observation y_{t+1}, given our data today D_t, is precisely the expectation of the multivariate normal for the likelihood, namely E[F_{t+1}^T θ_{t+1} + v_{t+1} | D_t]. Once we make this connection it simply reduces to applying the linearity of the expectation operator to the remaining matrices and vectors, ultimately leading us to f_{t+1}.
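As a quick sanity check of this linearity argument, the following sketch (with toy values that are entirely assumed, not taken from the text) draws Monte Carlo samples of θ_{t+1} and v_{t+1} and confirms that the sample mean of y_{t+1} matches the closed-form f_{t+1} = F_{t+1}^T a_{t+1}:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-dimensional state, scalar observation (all values assumed)
F = np.array([1.0, 0.5])            # F_{t+1}
a = np.array([2.0, -1.0])           # a_{t+1} = E[theta_{t+1} | D_t]
R = np.array([[0.3, 0.1],
              [0.1, 0.2]])          # R_{t+1} = Var[theta_{t+1} | D_t]
V = 0.25                            # V_{t+1}, observation noise variance

# Closed-form one-step forecast mean: f_{t+1} = F^T a_{t+1}
f = F @ a

# Monte Carlo check of the same expectation
theta = rng.multivariate_normal(a, R, size=200_000)
v = rng.normal(0.0, np.sqrt(V), size=200_000)
y = theta @ F + v
print(f, y.mean())   # both values should be close to 1.5
```

The point is simply that v_{t+1} has zero mean and θ_{t+1} | D_t has mean a_{t+1}, so the expectation passes straight through the linear observation equation.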

However, it is not sufficient simply to calculate the mean; we must also know the variance of tomorrow's observation given today's data, otherwise we cannot fully characterise the distribution from which tomorrow's prediction is drawn.

Var[y_{t+1} | D_t] = Var[F_{t+1}^T θ_{t+1} + v_{t+1} | D_t]    (13.23)

= F_{t+1}^T Var[θ_{t+1} | D_t] F_{t+1} + V_{t+1}    (13.24)

= F_{t+1}^T R_{t+1} F_{t+1} + V_{t+1}    (13.25)

= Q_{t+1}    (13.26)
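Both one-step forecast moments are then a few lines of linear algebra. Here is a minimal sketch, with the function name and the toy numerical values being assumptions of mine for illustration:

```python
import numpy as np

def one_step_forecast(F, a, R, V):
    """One-step-ahead forecast distribution of y_{t+1} given D_t.

    Returns (f, Q) where f = F^T a is the mean (13.22) and
    Q = F^T R F + V is the variance (13.26).
    """
    f = F @ a
    Q = F @ R @ F + V
    return f, Q

# Toy values (assumed, for illustration only)
F = np.array([1.0, 0.5])
a = np.array([2.0, -1.0])
R = np.array([[0.3, 0.1],
              [0.1, 0.2]])
V = 0.25

f, Q = one_step_forecast(F, a, R, V)
# For these toy values f = 1.5 and Q = 0.7 (up to floating-point rounding)
```

Note how the two noise sources enter Q separately: the state uncertainty R_{t+1} is projected through F_{t+1}, while the observation noise V_{t+1} is simply added on.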

Now that we have the expectation and variance of tomorrow's observation, given today's data, we are able to provide the general forecast for k steps ahead, by fully characterising the distribution from which these predictions are drawn:

y_{t+k} | D_t ∼ N(f_{t+k|t}, Q_{t+k|t})    (13.27)

Note that I have used some unusual notation here. What does it mean to have a subscript of t+k|t? It is a convenient shorthand for the following quantities:

f_{t+k|t} = F_{t+k}^T G^{k-1} a_{t+1}    (13.28)

Q_{t+k|t} = F_{t+k}^T R_{t+k|t} F_{t+k} + V_{t+k}    (13.29)

R_{t+k|t} = G^{k-1} R_{t+1} (G^{k-1})^T + ∑_{j=2}^{k} G^{k-j} W_{t+j} (G^{k-j})^T    (13.30)
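These three formulas translate directly into code. The sketch below is a simplified version in which F, G, W and V are held constant over time (the equations allow them to vary with t); the function name and the toy local-linear-trend values are assumptions of mine, not taken from the text:

```python
import numpy as np

def k_step_forecast(F, G, W, V, a1, R1, k):
    """k-step-ahead forecast per (13.28)-(13.30), assuming
    time-invariant F, G, W, V for simplicity.

    a1 = a_{t+1} and R1 = R_{t+1} come from the Kalman Filter
    update at time t. Returns (f, Q) with y_{t+k} | D_t ~ N(f, Q).
    """
    Gk1 = np.linalg.matrix_power(G, k - 1)
    # Mean (13.28): f_{t+k|t} = F^T G^{k-1} a_{t+1}
    f = F @ (Gk1 @ a1)
    # State variance (13.30): propagate R_{t+1} forward and
    # accumulate the system noise injected at each step
    Rk = Gk1 @ R1 @ Gk1.T
    for j in range(2, k + 1):
        Gkj = np.linalg.matrix_power(G, k - j)
        Rk += Gkj @ W @ Gkj.T
    # Observation variance (13.29): Q_{t+k|t} = F^T R_{t+k|t} F + V
    Q = F @ Rk @ F + V
    return f, Q

# Toy local-linear-trend example (all values assumed)
F = np.array([1.0, 0.0])
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])
W = 0.01 * np.eye(2)
V = 0.25
a1 = np.array([2.0, 0.1])
R1 = 0.1 * np.eye(2)

f, Q = k_step_forecast(F, G, W, V, a1, R1, k=3)
# For this example f = 2.2 and Q = 0.78 (up to floating-point rounding)
```

With k=1 the loop is empty and the function collapses back to the one-step forecast f_{t+1}, Q_{t+1} derived above, which is a useful consistency check.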

As I have mentioned repeatedly in this chapter, we should not concern ourselves too much with the verbosity of the Kalman Filter and its notation. Instead we should think about the overall procedure and its Bayesian underpinnings.

We now have the means of predicting new values of the series. This is an alternative to the

predictions produced by combining ARIMA and GARCH.

