
Unsupervised Learning

To decide which neighboring neurons need to be modified, the network uses a neighborhood function ∧(r); normally, the Gaussian Mexican hat function is chosen as the neighborhood function. The neighborhood function is mathematically represented as follows:

∧(r) = e^(−d² / 2σ²)

Here, σ is the time-dependent radius of influence of a neuron and d is its distance from the winning neuron. Graphically, the function looks like a hat (hence its name), as you can see in the following figure:

Figure 6: The "Gaussian Mexican hat" function, visualized in graph form
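The neighborhood function above can be sketched in a few lines of NumPy (the function name `neighborhood` is illustrative, not from the text):

```python
import numpy as np

def neighborhood(d, sigma):
    """Gaussian neighborhood value for a neuron at grid distance d
    from the winning neuron, given the current radius of influence sigma."""
    return np.exp(-d**2 / (2 * sigma**2))

# The winner itself (d = 0) receives the full update, while neurons
# farther away on the grid receive exponentially smaller updates.
print(neighborhood(np.array([0.0, 1.0, 2.0, 3.0]), sigma=1.0))
```

Note how shrinking `sigma` over time makes the values for d > 0 fall off more steeply, which is exactly the radius reduction described next.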

Another important property of the neighborhood function is that its radius reduces

with time. As a result, in the beginning, many neighboring neurons' weights are

modified, but as the network learns, eventually a few neurons' weights (at times,

only one or none) are modified in the learning process. The change in weight is given

by the following equation:

dW = η ∧(r) (X − W)

The process is repeated for all the inputs for a given number of iterations. As the

iterations progress, we reduce the learning rate and the radius by a factor dependent

on the iteration number.
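Putting the pieces together, the whole procedure can be sketched as a minimal training loop. This is an illustrative implementation, not the book's code: the function name, grid shape, and exponential decay schedules for the learning rate and radius are all assumptions.

```python
import numpy as np

def train_som(X, grid_shape=(10, 10), n_iters=100, eta0=0.5, sigma0=3.0):
    """Minimal SOM training sketch: for each input, find the winning neuron
    (the one whose weight vector is closest to the input), then update every
    neuron's weights by dW = eta * h * (X - W), where h is the Gaussian
    neighborhood value based on grid distance from the winner."""
    rng = np.random.default_rng(0)
    rows, cols = grid_shape
    W = rng.random((rows, cols, X.shape[1]))   # one weight vector per grid neuron
    # Grid coordinates of every neuron, used to measure distance to the winner
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    )
    for t in range(n_iters):
        # Reduce the learning rate and radius by a factor that depends
        # on the iteration number (exponential decay assumed here)
        eta = eta0 * np.exp(-t / n_iters)
        sigma = sigma0 * np.exp(-t / n_iters)
        for x in X:
            # Winning neuron: smallest Euclidean distance in weight space
            dist = np.linalg.norm(W - x, axis=-1)
            winner = np.unravel_index(np.argmin(dist), (rows, cols))
            # Grid distance d of every neuron from the winner
            d = np.linalg.norm(coords - np.array(winner), axis=-1)
            h = np.exp(-d**2 / (2 * sigma**2))   # neighborhood function
            W += eta * h[..., None] * (x - W)    # dW = eta * h * (X - W)
    return W
```

As `sigma` shrinks across iterations, `h` becomes negligible for all but the winner's immediate neighbors, so fewer and fewer weights receive a meaningful update, matching the behavior described above.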

SOMs are computationally expensive and thus are not really useful for very large datasets. Still, they are easy to understand, and they can capture the similarity between input data points very nicely. Thus, they have been employed for image segmentation and to determine word similarity maps in NLP [3].

