MACHINE LEARNING TECHNIQUES - LASA
Notice that, if we use the identity function for f and g, we recover the classical Hebbian rule. Recall that, if the two variables are uncorrelated, we have $E\{y_1 y_2\} = 0$, and that, if they are independent, we have $E(f(y_1)\,f(y_2)) = E(f(y_1))\,E(f(y_2))$ for any given function $f$. The network must thus converge to a solution that satisfies the latter condition.

Figure 6-13: ICA with anti-Hebbian learning applied to two images that have been mixed together. After a number of iterations, the network converges to a correct separation of the two source images. [DEMOS\ICA\ICA_IMAGE_MIX.M]
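The course demo above applies this scheme to images; as a complement, here is a minimal NumPy sketch of a Jutten-Hérault-style anti-Hebbian network separating two mixed one-dimensional signals. The Laplace sources, the mixing matrix, the learning rate, and the choices of f and g are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, non-Gaussian sources and an (unknown) linear mixture.
s = rng.laplace(size=(2, 20000))
A = np.array([[1.0, 0.6], [0.5, 1.0]])
x = A @ s

w12 = w21 = 0.0        # lateral inhibitory weights between the two outputs
eta = 0.001            # learning rate

f = np.tanh            # odd nonlinearity f
def g(y):              # odd nonlinearity g (f = g = identity gives plain Hebb)
    return y

for t in range(x.shape[1]):
    # Feedback outputs y1 = x1 - w12*y2 and y2 = x2 - w21*y1, solved exactly.
    y1, y2 = np.linalg.solve([[1.0, w12], [w21, 1.0]], x[:, t])
    # Anti-Hebbian update: inhibition grows while f(y1) and g(y2) co-vary,
    # and settles once E[f(y1) g(y2)] factorizes, i.e. vanishes here.
    w12 += eta * f(y1) * g(y2)
    w21 += eta * f(y2) * g(y1)
```

If the rule converges, w12 and w21 should approach the off-diagonal mixing coefficients (here 0.6 and 0.5), so the outputs recover the sources up to scaling; the weights stop changing exactly when the nonlinear correlation condition derived above is satisfied.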
6.8 The Self-Organizing Map (SOM)
The SOM is an algorithm used to visualize and interpret large high-dimensional data sets. Typical applications are the visualization of process states or financial results, representing the central dependencies within the data on the map. It is a way of reducing the dimensionality of a dataset by producing a map, usually of 1 or 2 dimensions, which plots the similarities of the data by grouping similar data items together.

The map consists of a regular grid of processing units, "neurons". A model of some multidimensional observation, possibly a vector of features, is associated with each unit. The map attempts to represent all the available observations with optimal accuracy using a restricted set of models. At the same time, the models become ordered on the grid so that similar models are close to each other and dissimilar models far from each other.
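To make this concrete, here is a minimal sketch of the lookup step on an already trained map, where each observation is summarized by the grid position of its best-matching model; all shapes and values here are illustrative assumptions.

```python
import numpy as np

# Assume a trained 5x5 map: one model vector per grid cell (illustrative).
rows, cols, dim = 5, 5, 10
models = np.random.default_rng(1).uniform(size=(rows * cols, dim))

def project(x):
    """Return the grid coordinates of the best-matching model for x."""
    bmu = np.argmin(np.linalg.norm(models - x, axis=1))
    return divmod(bmu, cols)   # (row, col) position on the 2-D map

# Every high-dimensional observation is reduced to a 2-D map position;
# once the map is ordered, similar observations land on nearby cells.
obs = np.random.default_rng(2).uniform(size=dim)
print(project(obs))
```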
6.8.1 Kohonen Network
Kohonen's SOM is called a topology-preserving map because there is a topological structure imposed on the nodes in the network. A topological map is simply a mapping that preserves neighborhood relations.

In the networks we have considered so far, we have ignored the geometrical arrangement of output nodes. Each node in a given layer has been identical in that each is connected with all of the nodes in the upper and/or lower layer. We are now going to take into consideration the physical arrangement of these nodes. Nodes that are "close" together are going to interact differently from nodes that are "far" apart.

What do we mean by "close" and "far"? We can think of organizing the output nodes in a line or in a planar configuration.
The goal is to train the net so that nearby outputs correspond to nearby inputs. E.g. if $x_1$ and $x_2$ are two input vectors and $z_1$ and $z_2$ are the locations of the corresponding winning output nodes, then $z_1$ and $z_2$ should be close if $x_1$ and $x_2$ are similar. A network that performs this kind of mapping is called a feature map.
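The following is a minimal sketch of this training idea for a one-dimensional map of ten nodes receiving two-dimensional inputs. The Gaussian neighborhood, learning rate, and iteration count are illustrative assumptions; a practical implementation would also shrink the neighborhood and learning rate over time.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 10
W = rng.uniform(size=(n_nodes, 2))   # one 2-D model vector per grid node
grid = np.arange(n_nodes)            # node positions along the line
eta, sigma = 0.1, 2.0                # learning rate, neighborhood width

for _ in range(2000):
    x = rng.uniform(size=2)                             # training input
    winner = np.argmin(np.linalg.norm(W - x, axis=1))   # winning node
    # Gaussian neighborhood measured on the grid, not in input space:
    # nodes close to the winner on the line are pulled strongly toward x,
    # distant nodes barely move, which is what orders the map.
    h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
    W += eta * h[:, None] * (x - W)

# After training, nearby inputs win at nearby grid positions (a feature map).
```

The key design point is that the neighborhood function h is computed from distances on the output grid, while the winner is selected by distances in input space; coupling the two is what makes nearby outputs come to represent nearby inputs.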
In the brain, neurons tend to cluster in groups. The connections within a group are much stronger than the connections to neurons outside the group. Kohonen's network tries to mimic this in a simple way.