
The model will tend to stay in a particular state and then suddenly jump to a new state, remaining in that state for some time. This is precisely the behaviour desired from such a model when trying to apply it to market regimes. The regimes themselves are not expected to change too quickly; consider regulatory changes and other slow-moving macroeconomic effects, for instance. However, when they do change, they are expected to persist for some time.

14.2.1 Hidden Markov Model Mathematical Specification

The corresponding joint density function for the HMM is given by (again using notation from

Murphy (2012)[71]):

p(z_{1:T}, x_{1:T}) = p(z_{1:T}) p(x_{1:T} | z_{1:T})   (14.6)

= [ p(z_1) ∏_{t=2}^{T} p(z_t | z_{t-1}) ] [ ∏_{t=1}^{T} p(x_t | z_t) ]   (14.7)

The first line states that the joint probability of seeing the full set of hidden states and

observations is equal to the probability of seeing the hidden states multiplied by the probability

of seeing the observations, conditional on the states. This makes sense as the observations cannot

affect the states, but the hidden states do indirectly affect the observations.

The second line factorises these two distributions into transition functions. The transition function for the states is given by p(z_t | z_{t-1}), while that for the observations (which depend upon the states) is given by p(x_t | z_t).
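The factorisation in (14.7) can be sketched numerically for a short toy sequence. All of the parameter values below (initial distribution, transition matrix, Gaussian means and standard deviations) are purely illustrative and not taken from the text; the point is only that the joint density is a product of one initial term, the state-transition terms and the emission terms:

```python
import numpy as np

# Illustrative HMM parameters (NOT from the text):
pi = np.array([0.6, 0.4])            # initial distribution p(z_1)
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # state transitions p(z_t | z_{t-1})
mu = np.array([0.0, 1.0])            # emission means mu_k
sigma = np.array([0.5, 0.5])         # emission std devs (1-D case)

def gauss_pdf(x, m, s):
    """Univariate Gaussian density N(x | m, s^2)."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

z = [0, 0, 1]                        # a hypothetical hidden path z_1..z_3
x = [0.1, -0.2, 1.1]                 # a hypothetical observation path x_1..x_3

# Build the joint density term by term, as in (14.7):
joint = pi[z[0]]                     # p(z_1)
for t in range(1, len(z)):
    joint *= A[z[t - 1], z[t]]       # transition term p(z_t | z_{t-1})
for t in range(len(z)):
    joint *= gauss_pdf(x[t], mu[z[t]], sigma[z[t]])  # emission p(x_t | z_t)
```

In practice this product underflows for long sequences, which is why implementations work with log-densities instead.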

As with the Markov Model description above, it will be assumed that both the state and observation transition functions are time-invariant. This means that it is possible to utilise the K × K state transition matrix A, as before with the Markov Model, for that component of the model.
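The "sticky" persistence-then-jump behaviour described earlier falls out of a transition matrix A with large diagonal entries. A minimal simulation sketch, using an illustrative 2 × 2 matrix rather than any values from the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-state "sticky" transition matrix (K = 2).
# High self-transition probabilities keep the chain in its
# current regime for long stretches before a sudden jump.
A = np.array([[0.98, 0.02],
              [0.05, 0.95]])

def simulate_states(A, T, z0=0, rng=rng):
    """Sample a hidden state path z_1..z_T from the Markov chain A."""
    K = A.shape[0]
    states = np.empty(T, dtype=int)
    states[0] = z0
    for t in range(1, T):
        # Row A[states[t-1]] is the distribution p(z_t | z_{t-1}).
        states[t] = rng.choice(K, p=A[states[t - 1]])
    return states

z = simulate_states(A, T=500)
```

Since the sojourn time in a state is geometric, the expected run length in state 0 here is 1/(1 − 0.98) = 50 steps, which is the persistence the regime interpretation relies on.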

However, for the application considered here, namely observations of asset returns, the values are in fact continuous. This means the model choice for the observation transition function is more complex. A common choice is to make use of a conditional multivariate Gaussian distribution with mean µ_k and covariance σ_k. This is formalised below:

p(x_t | z_t = k, θ) = N(x_t | µ_k, σ_k)   (14.8)

That is, if the state z_t is currently equal to k, then the probability of seeing observation x_t, given the parameters of the model θ, is distributed as a multivariate Gaussian.

In order to make this a little clearer Figure 14.2 shows the evolution of the states z_t and how they lead indirectly to the evolution of the observations, x_t.
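Given a hidden state path, generating observations under (14.8) is a matter of drawing from the Gaussian whose parameters are selected by the current state. A sketch in the univariate case, with illustrative "calm" and "volatile" regime parameters that are not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state-conditional parameters for daily returns:
# a calm regime (k = 0) and a volatile regime (k = 1).
mu = np.array([0.0005, -0.001])     # mean mu_k per state
sigma = np.array([0.005, 0.02])     # std dev per state (1-D case)

def simulate_observations(states, mu, sigma, rng=rng):
    """Draw x_t ~ N(mu_k, sigma_k^2) where k = z_t, per (14.8)."""
    return rng.normal(mu[states], sigma[states])

states = np.array([0] * 250 + [1] * 250)  # a single regime switch
x = simulate_observations(states, mu, sigma)
```

The second half of the series is visibly noisier than the first, mirroring how a regime shift in z_t shows up only indirectly, through the distribution of x_t.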

14.2.2 Filtering of Hidden Markov Models

With the joint density function specified it remains to consider how the model will be utilised. In general state-space modelling there are often three main tasks of interest: filtering, smoothing and prediction. The previous chapter on State-Space Models and the Kalman Filter described these briefly. They will be repeated here for completeness:
