MACHINE LEARNING TECHNIQUES - LASA
Figure 7-1: Schematic illustrating the concepts of a HMM. The process is assumed to be composed of 7 hidden states. All transitions across all states are possible (fully connected). Note that for simplicity, we show only arrows across adjacent states. The schematic at the bottom shows a particular instance of transition across states over five time steps. Transitions across each state lead to an observation of a particular set of values for the system's variables. As we see, the system rotates first across states 1 and 2 and then stays for two time steps in state 3.

Transitions across states are described as a stochastic finite automaton. The stochasticity of the process is represented by computing a set of transition probabilities that determine the likelihood of staying in a state or jumping to another state. Transition probabilities are encapsulated in a N×N matrix A, whose elements {a_ij}, i,j = 1...N, represent the probability of transiting from state s_i to state s_j, i.e. a_ij = p(s_j | s_i). The sum of all elements in each row of A equals 1. Each state is associated with an initial probability π_i, i = 1,...,N, that represents the likelihood of being in that state at any given point in time. In addition, one assigns to each state i a density b_i(o), the so-called emission probability, that determines the probability of the observation taking a particular value when in state s_i. Depending on whether the observables take discrete versus continuous values, we talk about a discrete versus continuous HMM. When continuous, the density may be estimated through a Gaussian Mixture Model; in this case, one associates a GMM per state. HMMs are used widely in speech processing. There, one often uses very few hidden states (at most 3!). The complexity of the speech is then embedded in the GMM density model associated with each state.

© A.G.Billard 2004 – Last Update March 2011
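The stochastic finite automaton described above can be sketched in a few lines of code. The following is a minimal illustration, not part of the original text: the 3-state matrix A and vector π hold hypothetical values chosen only to satisfy the row-stochasticity constraints, and `sample_states` draws one hidden-state sequence the way the schematic at the bottom of Figure 7-1 does.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical 3-state transition matrix: A[i, j] = p(s_j | s_i).
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
# Hypothetical initial state probabilities pi_i.
pi = np.array([0.5, 0.3, 0.2])

# Each row of A sums to 1, as does pi (the constraints stated in the text).
assert np.allclose(A.sum(axis=1), 1.0)
assert np.isclose(pi.sum(), 1.0)

def sample_states(A, pi, T, rng):
    """Sample a length-T hidden-state sequence from the automaton."""
    N = len(pi)
    states = [int(rng.choice(N, p=pi))]        # draw initial state from pi
    for _ in range(T - 1):
        # Draw the next state from the row of A indexed by the current state.
        states.append(int(rng.choice(N, p=A[states[-1]])))
    return states

print(sample_states(A, pi, 5, rng))
```

A continuous HMM would additionally attach an emission density b_i(o) (e.g. a GMM) to each sampled state; here only the hidden transition process is simulated.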
Figure 7-2: Schematic illustrating the concept of emission probabilities associated to each state in a HMM. o_t is the observable at time t. It can take any value v ∈ ℝ³.

7.2.2 Estimating a HMM

Designing an HMM consists in determining the hidden process that best explains the observations. The unknown variables in a HMM are the number of states, the transition and initial probabilities, and the emission probabilities. Since the matrix A is quite large, people often choose a sparse matrix, i.e. they set most of the probabilities to zero, hence allowing only some transitions across some states. The most popular model is the so-called left-right model, which allows transitions solely from state 1 to state 2, and so forth until reaching the final state.

Estimating the parameters of the HMM is done through a variant of Expectation-Maximization called the Baum-Welch procedure. For a fixed topology (i.e. number of states), one estimates the set of parameters λ = (A, B, π) by maximizing the likelihood of the observations given the model:

P(O | λ) = ∑_q P(O | q, λ) P(q | λ)   (7.2)

where q = {q_1,...,q_T} is one particular set of expected state transitions during the T observation steps. In the example of Figure 7-1, the set q is {q_1 = s_1, q_2 = s_2, q_3 = s_1, q_4 = s_3, q_5 = s_3}.

Computing all possible combinations of states in Equation (7.2) is prohibitive. To simplify computation, one uses dynamic programming through the so-called forward-backward computation. The principle is illustrated in Figure 7-3, left. It consists of propagating forward in time the estimate of the probability of being in a particular state given the set of observations. At each time step, the estimate of being in state i is given by:

α_t(i) = P(o_1...o_t, q_t = s_i | λ)   (7.3)
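The forward recursion of Equation (7.3) can be sketched for a small discrete HMM. The numbers below (2 states, 2 observation symbols) are hypothetical, chosen only for illustration: the point is that summing the final α values yields P(O | λ) in O(T·N²) operations, without enumerating every path q of Equation (7.2).

```python
import numpy as np

# Hypothetical discrete HMM with 2 states and 2 observation symbols.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])      # A[i, j] = p(s_j | s_i)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # B[i, o] = b_i(o), the emission probability
pi = np.array([0.6, 0.4])       # initial state probabilities

def forward(obs, A, B, pi):
    """Forward pass: alpha[t, i] = P(o_1...o_t, q_t = s_i | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                    # initialisation
    for t in range(1, T):
        # Propagate one step: sum over predecessor states, then emit o_t.
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

obs = [0, 1, 0]
alpha = forward(obs, A, B, pi)
print(alpha[-1].sum())          # P(O | lambda) ≈ 0.10893
```

Summing α_T(i) over the states i marginalises out the final hidden state, which is exactly the sum over all paths in Equation (7.2); the backward pass (not shown) propagates a symmetric quantity from t = T down to 1.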