advanced-algorithmic-trading

There are two important points to note about this definition:

• µ = µ(t), i.e. the mean is, in general, a function of time.

• This expectation is taken across the ensemble population of all the possible time series that could have been generated under the time series model. In particular, it is NOT the time-average expression (x_1 + x_2 + ... + x_k)/k (more on this below).
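The distinction between the ensemble mean and the time average of a single realisation can be made concrete with a short simulation. The following sketch (not from the text; the model and parameters are illustrative) generates many realisations of a simple drifting random walk and estimates µ(t) by averaging across the ensemble at each time point:

```python
import numpy as np

# Illustrative sketch: estimate the ensemble mean mu(t) of a drifting
# random-walk model by averaging across many simulated realisations.
rng = np.random.default_rng(42)
n_realisations, n_steps = 10_000, 50

# Each row is one realisation x_1, ..., x_n of the model
paths = np.cumsum(
    rng.normal(loc=0.1, scale=1.0, size=(n_realisations, n_steps)), axis=1
)

# Ensemble mean at each t: average over realisations, NOT over time.
# For this model mu(t) grows linearly with t (roughly 0.1 * t).
mu_t = paths.mean(axis=0)          # shape (n_steps,), one estimate per t

# Contrast: the time average of a single realisation, (x_1 + ... + x_k)/k,
# which is a different quantity entirely for a non-stationary series.
time_avg_single = paths[0].mean()
```

In real applications we cannot generate this ensemble, which is exactly the difficulty discussed next.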

This definition is useful when we are able to generate many realisations of a time series model. However, in real life this is usually not the case! We are "stuck" with only one past history, and as such we will often only have access to a single historical time series for a particular asset or situation.

So how do we proceed if we wish to estimate the mean, given that we do not have access to

these hypothetical realisations from the ensemble? Well, there are two options:

• Simply estimate the mean at each point using the observed value.

• Decompose the time series to remove any deterministic trends or seasonality effects, giving a residual series. Once we have this series we can make the assumption that the residual series is stationary in the mean, i.e. that µ(t) = µ, a fixed value independent of time. It then becomes possible to estimate this constant population mean using the sample mean x̄ = (1/n) ∑_{t=1}^{n} x_t.
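The second option can be sketched in a few lines. In this illustrative example (the series and trend are synthetic, chosen only to demonstrate the procedure) we fit and subtract a linear trend from a single observed series, then treat the residuals as stationary in the mean and estimate µ by the sample mean:

```python
import numpy as np

# Sketch of the decomposition approach: remove a deterministic linear trend
# from a single observed series, then estimate the constant mean of the
# residual series using the sample mean.
rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
series = 0.05 * t + rng.normal(scale=1.0, size=n)   # linear trend + noise

# Fit a degree-1 polynomial (linear trend) and subtract it
slope, intercept = np.polyfit(t, series, deg=1)
residuals = series - (slope * t + intercept)

# Sample mean of the residual series: our estimate of the constant mu
x_bar = residuals.mean()
```

For a least-squares fit that includes an intercept, the residuals have sample mean (essentially) zero by construction; with real data the interesting part is whether the residual series is plausibly stationary at all.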

Definition 8.3.2. Stationary in the Mean. A time series is stationary in the mean if µ(t) = µ,

a constant.

Now that we have discussed expectation values of time series we can use this to flesh out

the definition of variance. Once again we make the simplifying assumption that the time series

under consideration is stationary in the mean. With that assumption we can define the variance:

Definition 8.3.3. Variance of a Time Series. The variance σ²(t) of a time series model that is stationary in the mean is given by σ²(t) = E[(x_t − µ)²].

This is a straightforward extension of the variance defined above for random variables, except that σ²(t) is a function of time. Importantly, you can see how the definition strongly relies on the fact that the time series is stationary in the mean (i.e. that µ is not time-dependent).

You might notice that this definition leads to a tricky situation. If the variance itself varies with time, how are we supposed to estimate it from a single time series? As before, the presence of the expectation operator E(·) requires an ensemble of time series, and yet we will often only have one!

Once again, we simplify the situation by making an assumption. In particular, and as with the mean, we assume a constant population variance, denoted σ², which is not a function of time. Once we have made this assumption we are in a position to estimate its value using the sample variance definition above:

Var(x) = (1/(n − 1)) ∑_{t=1}^{n} (x_t − x̄)²   (8.4)

Note that for this to work we need to be able to estimate the sample mean, x̄. In addition, as with the sample covariance defined above, we must use n − 1 in the denominator in order to make the sample variance an unbiased estimator.
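As a quick sketch of equation (8.4), with illustrative data: the manual formula with n − 1 in the denominator agrees with NumPy's sample variance when Bessel's correction is requested via `ddof=1` (the default `ddof=0` gives the biased, divide-by-n estimator instead):

```python
import numpy as np

# Equation (8.4): unbiased sample variance with n - 1 in the denominator
# (Bessel's correction). Illustrative data, not from the text.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(x)
x_bar = x.mean()                      # sample mean, required first

var_manual = np.sum((x - x_bar) ** 2) / (n - 1)
var_numpy = np.var(x, ddof=1)         # ddof=1 selects the n - 1 denominator
```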
