MACHINE LEARNING TECHNIQUES - LASA
variance of the output, e.g. by increasing the weights. Note that, by doing this, you also increase the unpredictability of the neuron's output, which can have disastrous consequences in some applications.

Noise on the Inputs

In most applications, you will have to face noise on the inputs, with a different type of noise depending on the input. In such a scenario, the output of the neuron is described by:

    y = \sum_j w_j ( x_j + \nu_j )                                        (6.7)

One can show that the mutual information between input and output in this case becomes:

    I(x, y) = \frac{1}{2} \log \frac{\sigma_y^2}{\sigma_\nu^2 \sum_j w_j^2}        (6.8)

In this case, it is not sufficient to simply increase the weights since, by doing so, one also increases the denominator. More sophisticated techniques must be used on a neuron-by-neuron basis.

More than one output neuron

Imagine a two-input, two-output scenario in which the two outputs attempt to jointly convey as much information as possible about the two inputs. In this case, each output neuron's activation is given by:

    y_i = \sum_j w_{ij} x_j + \nu_i , \quad i = 1, 2                      (6.9)

Similarly to the one-output case, we can assume that the noise terms are uncorrelated and Gaussian, and write:

    h(\nu) = h(\nu_1, \nu_2) = h(\nu_1) + h(\nu_2) = 1 + \log( 2 \pi \sigma_\nu^2 )

Since the output neurons both depend on the same two inputs, they are correlated. One can compute the correlation matrix R as:

    R = E( y y^T ) = E\left[ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} ( y_1 \;\; y_2 ) \right] = \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix}        (6.10)
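As a quick numerical check of (6.9) and (6.10), the correlation matrix R of the two-output model can be estimated from samples and compared against its analytic value. The following is a minimal sketch, assuming zero-mean, unit-variance Gaussian inputs and uncorrelated Gaussian noise; the weight matrix, noise level, and sample count are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                       # number of samples (illustrative)
W = np.array([[1.0, 0.5],
              [0.4, 1.2]])        # hypothetical weights w_ij
sigma_nu = 0.3                    # noise standard deviation (illustrative)

X = rng.normal(size=(n, 2))                 # zero-mean, unit-variance inputs
nu = sigma_nu * rng.normal(size=(n, 2))     # uncorrelated Gaussian noise
Y = X @ W.T + nu                            # y_i = sum_j w_ij x_j + nu_i   (6.9)

R_emp = Y.T @ Y / n                         # sample estimate of R = E(y y^T)
R_theory = W @ W.T + sigma_nu**2 * np.eye(2)  # analytic R for unit-variance inputs
```

With enough samples, `R_emp` matches `R_theory` to a few decimal places, and its off-diagonal entries show the correlation between the two outputs induced by the shared inputs.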
One can show that the mutual information is equal to:

    I(x, y) = \log \frac{\det(R)}{\sigma_\nu^2}                           (6.11)

Again, the variance of the noise \sigma_\nu^2 is fixed and so, to maximize the mutual information, we must maximize

    \det(R) = r_{11} r_{22} - r_{12} r_{21} = \sigma_\nu^4 + \sigma_\nu^2 ( \sigma_1^2 + \sigma_2^2 ) + \sigma_1^2 \sigma_2^2 ( 1 - \rho_{12}^2 )

where \sigma_i^2, i = 1, 2, is the variance of each output neuron in the absence of noise and \rho_{12} is the correlation coefficient of the output signals, also in the absence of noise. One can thus consider the two following situations:

Large noise variance: If \sigma_\nu is large, one can ignore the third term of the equation. It remains, then, to maximize the sum \sigma_1^2 + \sigma_2^2, which can be done by maximizing the variance of either neuron independently.

Low noise variance: If, on the contrary, the noise variance is very small, the third term becomes more important than the first two. In that case, one must find a tradeoff between maximizing the variance of each output neuron and keeping the correlation factor sufficiently small.

In other words, in a low-noise situation it is best to use a network in which the neurons' outputs are de-correlated from one another, i.e. where each output neuron conveys different information about the inputs; while in a high-noise situation it is best to have high redundancy in the output. This way, one gives more chances for the information conveyed in the input to be appropriately transferred to the output.

6.4 The Backpropagation Learning Rule

In Section 6.3.1, we saw the perceptron learning rule. Here, we will see a general supervised learning rule for a multi-layer perceptron neural network, called Backpropagation. Backpropagation belongs to the class of methods that minimize an objective function, here an error measure, as the criterion for optimization. Such methods are said to perform a gradient-descent type of optimization; see Section 9.4.1.
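To make the idea of gradient descent concrete before deriving the full rule, here is a minimal sketch on a one-dimensional quadratic error; the error function, starting point, and learning rate are arbitrary choices for illustration:

```python
# Gradient descent on a one-dimensional quadratic error E(w) = (w - 2)^2,
# whose minimum sits at w = 2 (the target value is arbitrary).
def dE(w):
    return 2.0 * (w - 2.0)   # derivative of the error with respect to w

w = -5.0      # arbitrary starting point
eta = 0.1     # learning rate
for _ in range(100):
    w -= eta * dE(w)         # step against the gradient
# w is now very close to the minimum at 2
```

Backpropagation applies exactly this update to every weight of the network; the only difficulty is computing the error derivative for weights buried in the hidden layers.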
Error descent methods are usually associated with supervised learning, in which the network is presented with a set of example data together with the answer we expect the network to give for each example.

© A.G.Billard 2004 – Last Update March 2011
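A minimal sketch of the full algorithm on the XOR problem, assuming one hidden layer of sigmoid units trained by gradient descent on the squared error; the layer sizes, learning rate, and iteration count are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single-layer perceptron cannot solve
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One hidden layer of 3 sigmoid units (sizes chosen arbitrarily)
W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)
eta = 0.5  # learning rate (illustrative)

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    y = sigmoid(h @ W2 + b2)      # network outputs
    losses.append(0.5 * np.mean((y - T) ** 2))

    # backward pass: propagate the error derivative layer by layer
    delta2 = (y - T) * y * (1 - y)           # output-layer error term
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # hidden-layer error term

    # gradient-descent updates, averaged over the four patterns
    W2 -= eta * (h.T @ delta2) / len(X); b2 -= eta * delta2.mean(axis=0)
    W1 -= eta * (X.T @ delta1) / len(X); b1 -= eta * delta1.mean(axis=0)
```

The key step is the computation of `delta1`: the output-layer error term is propagated backwards through the weights `W2`, which is what gives the algorithm its name.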