these two quantities to compute the probability $\xi_t(i,j)$ of transiting from state $i$ to state $j$ at time $t$. This is illustrated in Figure 7-5.
Figure 7-5: Combining the forward and backward procedures to estimate the transition probability across the arc joining state $i$ and state $j$.
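As a concrete illustration, below is a minimal NumPy sketch of how $\xi_t(i,j)$ can be obtained by combining the forward variables $\alpha_t(i)$ and backward variables $\beta_{t+1}(j)$ with the transition matrix $A$ and the emission matrix $B$. The function name and array layout are assumptions made for this sketch, not part of the original text:

```python
import numpy as np

def compute_xi(alpha, beta, A, B, obs):
    """Probability xi_t(i,j) of transiting from state i to state j at time t.

    alpha : (T, N) forward variables alpha_t(i)
    beta  : (T, N) backward variables beta_t(i)
    A     : (N, N) transition matrix, A[i, j] = a_ij
    B     : (N, M) emission matrix, B[j, k] = b_j(k)
    obs   : (T,)  observation indices o_1 ... o_T
    """
    T, N = alpha.shape
    xi = np.zeros((T - 1, N, N))
    likelihood = alpha[-1].sum()  # P(O | lambda), from the forward pass
    for t in range(T - 1):
        # alpha_t(i) * a_ij * b_j(o_{t+1}) * beta_{t+1}(j), for all i, j
        xi[t] = alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1]
        xi[t] /= likelihood
    return xi
```

Dividing by the total observation likelihood ensures that, for each $t$, $\xi_t(i,j)$ sums to one over all pairs $(i,j)$.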
The probability of being in state $i$ at time $t$ can then be inferred by simply summing over all the transition probabilities and is given by:

$$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j) \qquad (7.4)$$
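Continuing the same hypothetical sketch, Eq. (7.4) is a single sum over $j$; since $\xi$ is defined for $t = 1 \dots T-1$, this yields $\gamma_t(i)$ for those time steps, which is all the update rules below require:

```python
def compute_gamma(xi):
    """gamma_t(i) = sum over j of xi_t(i, j), Eq. (7.4).

    Returns an array of shape (T-1, N): gamma_t(i) for t = 1 ... T-1,
    which is sufficient for the re-estimation formulas (7.5) and (7.6).
    """
    return xi.sum(axis=2)
```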
From the computation of both $\xi_t(i,j)$ and $\gamma_t(i)$ for each state, i.e. $i, j = 1 \dots N$, one can now derive the update rule for all open parameters. HMM re-estimation hence starts from an estimate of the parameters $\lambda = (A, B, \pi)$ and updates these iteratively according to the following.
The initial probabilities follow immediately from the probability of being in any given state at time $t=1$:

$$\hat{\pi}_i = \gamma_1(i) \qquad (7.5)$$
The transition probabilities are then given by summing the probabilities of transiting from state $i$ to state $j$ over the course of the whole sequence. This is normalized by the probability of being in state $i$ (the reader will here recognize Bayes' rule, with $\hat{a}_{ij}$ being the conditional probability of being in $j$ when coming from $i$):

$$\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)} \qquad (7.6)$$
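Putting the two update rules together, a minimal sketch of one re-estimation step for $\pi$ and $A$ could look as follows (again with assumed array shapes; the corresponding update of $B$ and the iteration to convergence are not shown):

```python
def reestimate(xi, gamma):
    """One re-estimation step for pi and A, Eqs. (7.5) and (7.6).

    xi    : (T-1, N, N) transition probabilities xi_t(i, j)
    gamma : (T-1, N)    state probabilities gamma_t(i)
    """
    pi_hat = gamma[0]  # Eq. (7.5): gamma_1(i)
    # Eq. (7.6): expected transitions i -> j, normalized by time spent in i
    A_hat = xi.sum(axis=0) / gamma.sum(axis=0)[:, None]
    return pi_hat, A_hat
```

In practice these updates are iterated, together with the update for $B$, until the likelihood of the observation sequence stops increasing.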