MACHINE LEARNING TECHNIQUES - LASA


Figure 7-1: Schematic illustrating the concepts of an HMM. The process is assumed to be composed of 7 hidden states. All transitions across all states are possible (fully connected); note that, for simplicity, we show only arrows across adjacent states. The schematic at the bottom shows a particular instance of transitions across states over five time steps. Each transition into a state leads to the observation of a particular set of values of the system's variables. As we see, the system alternates first between states 1 and 2 and then stays for two time steps in state 3.

Transitions across states are described as a stochastic finite automaton. The stochasticity of the process is represented by a set of transition probabilities that determine the likelihood of staying in a state or of jumping to another state. Transition probabilities are encapsulated in an $N \times N$ matrix $A$, whose elements $\{a_{ij}\}_{i,j=1,\dots,N}$ represent the probability of transiting from state $i$ to state $j$, i.e. $a_{ij} = p(s_j \mid s_i)$. The sum of all elements in each row of $A$ equals 1. Each state is further associated with an initial probability $\pi_i$, $i = 1, \dots, N$, that represents the likelihood of being in that state at the first time step. In addition, one assigns to each state $i$ a density $b_i(o)$, the so-called emission probability, that determines the probability of the observation taking a particular value when in state $S_i$. Depending on whether the observables take discrete or continuous values, we speak of a discrete or a continuous HMM. When continuous, the density may be estimated through a Gaussian Mixture Model (GMM); in this case, one associates one GMM per state. HMMs are used widely in speech processing. There, one often uses very few hidden states (at most 3!); the complexity of the speech is then embedded in the GMM density model associated with each state.
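To make the notation concrete, the sketch below shows one possible way of storing the parameter set $\lambda = (A, B, \pi)$ and of simulating the stochastic automaton for a small discrete HMM in Python. The number of states, the number of observation symbols and all probability values are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Illustrative discrete HMM with N = 3 hidden states and M = 4 observation symbols.
N, M = 3, 4

# Transition matrix A: a_ij = p(s_j | s_i); every row sums to 1.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

# Initial state probabilities pi_i.
pi = np.array([0.5, 0.3, 0.2])

# Emission probabilities B: b_i(o) = p(o | s_i) for a *discrete* HMM.
# (For a continuous HMM, each row would be replaced by a GMM density.)
B = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.1, 0.1, 0.2, 0.6]])

assert np.allclose(A.sum(axis=1), 1.0) and np.allclose(B.sum(axis=1), 1.0)

def sample_sequence(T, seed=0):
    """Generate T observations by simulating the stochastic automaton."""
    rng = np.random.default_rng(seed)
    states, obs = [], []
    s = rng.choice(N, p=pi)                  # draw the initial state from pi
    for _ in range(T):
        obs.append(rng.choice(M, p=B[s]))    # emit an observation from b_s(.)
        states.append(s)
        s = rng.choice(N, p=A[s])            # jump according to row s of A
    return states, obs
```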


Figure 7-2: Schematic illustrating the concept of emission probabilities associated with each state in an HMM. $o_t$ is the observable at time $t$; it can take any value $v \in \mathbb{R}^3$.

7.2.2 Estimating an HMM

Designing an HMM consists in determining the hidden process that best explains the observations. The unknown variables in an HMM are the number of states, the transition and initial probabilities, and the emission probabilities. Since the matrix $A$ can be quite large, people often choose a sparse matrix, i.e. they set most of the probabilities to zero, hence allowing only some transitions across some states. The most popular model is the so-called left-right model, which allows transitions solely from state 1 to state 2, and so forth, until reaching the final state.
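For illustration, a left-right topology can be encoded simply by letting each state either stay where it is or move to the next state; the sketch below builds such a matrix, where the number of states and the self-transition probability are arbitrary assumptions.

```python
import numpy as np

def left_right_transition_matrix(n_states, self_prob=0.7):
    """Sketch of a left-right (Bakis) topology: each state may only stay
    where it is or move on to the next state; the last state is absorbing.
    The value of self_prob is an illustrative assumption."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = self_prob              # probability of staying in state i
        A[i, i + 1] = 1.0 - self_prob    # probability of moving to state i+1
    A[-1, -1] = 1.0                      # final state is absorbing
    return A

A = left_right_transition_matrix(5)
# Rows still sum to 1, but most transitions are forbidden (set to zero).
assert np.allclose(A.sum(axis=1), 1.0)
```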

Estimating the parameters of the HMM is done through a variant of Expectation-Maximization called the Baum-Welch procedure. For a fixed topology (i.e. a fixed number of states), one estimates the set of parameters $\lambda = (A, B, \pi)$ by maximizing the likelihood of the observations given the model:

$$P(O \mid \lambda) = \sum_{q} P(O \mid q, \lambda)\, P(q \mid \lambda) \qquad (7.2)$$

where $q = \{q_1, \dots, q_T\}$ is one particular sequence of state transitions over the $T$ observation steps. In the example of Figure 7-1, the set $q$ is $\{q_1 = s_1,\; q_2 = s_2,\; q_3 = s_1,\; q_4 = s_3,\; q_5 = s_3\}$.
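For a very short observation sequence, Equation (7.2) can be evaluated literally by enumerating every possible state sequence. The sketch below does so for the discrete toy parameterisation assumed earlier ($A$, $B$, $\pi$ as numpy arrays); with $N$ states and $T$ steps it visits $N^T$ sequences, which is exactly why this direct computation is prohibitive in practice.

```python
import itertools
import numpy as np

def likelihood_bruteforce(obs, A, B, pi):
    """Literal evaluation of Eq. (7.2): sum over ALL possible state
    sequences q of P(O | q, lambda) * P(q | lambda).
    Only feasible for tiny toy problems (N**T terms)."""
    N, T = A.shape[0], len(obs)
    total = 0.0
    for q in itertools.product(range(N), repeat=T):   # every sequence q_1 ... q_T
        p_q = pi[q[0]] * np.prod([A[q[t - 1], q[t]] for t in range(1, T)])
        p_o_given_q = np.prod([B[q[t], obs[t]] for t in range(T)])
        total += p_q * p_o_given_q
    return total
```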

Computing all possible combinations of states in Equation (7.2) is prohibitive. To simplify the computation, one uses dynamic programming through the so-called forward-backward computation. The principle is illustrated in Figure 7-3, left. It consists of propagating forward in time the estimate of the probability of being in a particular state given the observations seen so far. At each time step, the estimate of being in state $i$ is given by:

$$\alpha_t(i) = P(o_1 \dots o_t,\; q_t = s_i \mid \lambda) \qquad (7.3)$$
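The forward variable can be filled in recursively, $\alpha_1(i) = \pi_i\, b_i(o_1)$ and $\alpha_{t+1}(j) = \big[\sum_i \alpha_t(i)\, a_{ij}\big]\, b_j(o_{t+1})$, which avoids the exponential enumeration of Equation (7.2). Below is a minimal sketch of this forward pass for the discrete toy parameterisation assumed earlier.

```python
import numpy as np

def forward(obs, A, B, pi):
    """Forward pass of the forward-backward computation (Eq. 7.3):
    alpha[t, i] = P(o_1 ... o_t, q_t = s_i | lambda).
    A, B, pi are assumed to follow the earlier discrete toy sketch."""
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                      # initialisation: pi_i * b_i(o_1)
    for t in range(1, T):
        # propagate the estimate one step forward in time
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

# The likelihood P(O | lambda) of Eq. (7.2) is then recovered as the sum
# of the forward variables at the last time step:
#   P(O | lambda) = forward(obs, A, B, pi)[-1].sum()
```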

