MACHINE LEARNING TECHNIQUES - LASA


Notice that, if we use the identity function for f and g, we recover the classical Hebbian rule. Recall that, if the two variables are uncorrelated, we have $E\{y_1 y_2\} = 0$, and that, if they are independent, we have $E\left(f(y_1)f(y_2)\right) = E\left(f(y_1)\right)E\left(f(y_2)\right)$ for any given function f. The network must, thus, converge to a solution that satisfies the latter condition.

Figure 6-13: ICA with anti-Hebbian learning applied to two images that have been mixed together. After a number of iterations, the network converges to a correct separation of the two source images. [DEMOS\ICA\ICA_IMAGE_MIX.M]
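A minimal sketch of such an anti-Hebbian separation network is given below, in the style of the Hérault-Jutten feedback architecture. The choice of nonlinearities f and g, the learning rate, and the toy sources are illustrative assumptions, not the settings of the MATLAB demo referenced above:

    import numpy as np

    # Sketch of an anti-Hebbian (Herault-Jutten style) network separating
    # two linearly mixed sources. All parameter values are illustrative.
    rng = np.random.default_rng(0)

    # Two independent sources, mixed by an unknown matrix A.
    n = 20000
    s = np.vstack([np.sign(np.sin(2 * np.pi * 0.013 * np.arange(n))),  # square-like wave
                   rng.uniform(-1, 1, n)])                             # uniform noise
    A = np.array([[1.0, 0.6],
                  [0.5, 1.0]])
    x = A @ s

    f = lambda y: y ** 3        # odd nonlinearity for f (assumed choice)
    g = lambda y: np.tanh(y)    # odd nonlinearity for g (assumed choice)
    W = np.zeros((2, 2))        # feedback weights; diagonal stays zero
    eta = 0.01                  # learning rate

    for t in range(n):
        # Recurrent output: y = x - W y, i.e. y = (I + W)^(-1) x
        y = np.linalg.solve(np.eye(2) + W, x[:, t])
        # Anti-Hebbian update on the off-diagonal weights only
        dW = eta * np.outer(f(y), g(y))
        np.fill_diagonal(dW, 0.0)
        W += dW

    # At convergence E[f(y1) g(y2)] ~ 0, so the outputs are close to independent.
    Y = np.linalg.solve(np.eye(2) + W, x)
    print("residual cross-statistic:", np.mean(f(Y[0]) * g(Y[1])))

At convergence the off-diagonal weights cancel the mixing, and the residual cross-statistic printed at the end should be close to zero, which is exactly the independence condition discussed above.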

6.8 The Self-Organizing Map (SOM)

The SOM is an algorithm used to visualize and interpret large, high-dimensional data sets. Typical applications are the visualization of process states or financial results, representing the central dependencies within the data on the map. It is a way of reducing the dimensionality of a dataset by producing a map of usually 1 or 2 dimensions, which plots the similarities of the data by grouping similar data items together.

The map consists of a regular grid of processing units, "neurons". A model of some multidimensional observation, possibly a vector of features, is associated with each unit. The map attempts to represent all the available observations with optimal accuracy using this restricted set of models. At the same time, the models become ordered on the grid so that similar models are close to each other and dissimilar models far from each other.

6.8.1 Kohonen Network

Kohonen's SOM is called a topology-preserving map because there is a topological structure imposed on the nodes in the network. A topological map is simply a mapping that preserves neighborhood relations.

In the networks we have considered so far, we have ignored the geometrical arrangement of output nodes. Each node in a given layer has been identical in that each is connected with all of the nodes in the upper and/or lower layer. We are now going to take into consideration the physical arrangement of these nodes: nodes that are "close" together are going to interact differently than nodes that are "far" apart.

What do we mean by "close" and "far"? We can think of organizing the output nodes in a line or in a planar configuration.

The goal is to train the net so that nearby outputs correspond to nearby inputs. For example, if $x_1$ and $x_2$ are two input vectors and $z_1$ and $z_2$ are the locations of the corresponding winning output nodes, then $z_1$ and $z_2$ should be close if $x_1$ and $x_2$ are similar. A network that performs this kind of mapping is called a feature map.

In the brain, neurons tend to cluster in groups. The connections within a group are much stronger than the connections with neurons outside of the group. Kohonen's network tries to mimic this in a simple way.
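A minimal sketch of Kohonen training is given below. The 10x10 grid, the Gaussian neighborhood, and the exponential decay schedules for the learning rate and radius are common illustrative choices, not prescriptions from these notes. Each input is assigned to its best-matching unit (the "winner"), and the winner and its grid neighbors are pulled toward the input:

    import numpy as np

    # Sketch of Kohonen SOM training on a 2-D grid of model vectors.
    # Grid size, schedules, and toy data are illustrative assumptions.
    rng = np.random.default_rng(0)

    rows, cols, dim = 10, 10, 3                 # 10x10 map of 3-D model vectors
    W = rng.uniform(0, 1, (rows * cols, dim))   # one model vector per unit
    # Grid coordinates of each unit, used to measure "closeness" on the map
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    X = rng.uniform(0, 1, (5000, dim))          # toy data: random RGB colors

    n_iter = 5000
    eta0, sigma0 = 0.5, max(rows, cols) / 2

    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        # Winning unit: the model vector closest to the input
        bmu = np.argmin(np.sum((W - x) ** 2, axis=1))
        # Learning rate and neighborhood radius both decay over time
        frac = t / n_iter
        eta = eta0 * np.exp(-3 * frac)
        sigma = sigma0 * np.exp(-3 * frac)
        # Gaussian neighborhood on the grid, centered on the winner
        d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        # Move every unit toward the input, weighted by the neighborhood
        W += eta * h[:, None] * (x - W)

Because every update pulls an entire grid neighborhood toward the same input, units that are close on the grid end up with similar model vectors, which is precisely the topology-preserving property described in this section.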


© A.G.Billard 2004 – Last Update March 2011
