Chapter 10

Colour mapping using SOM

Some of the interesting properties of the feature map of the input space generated by a SOM are:

• The feature map provides a good representation of the input space. This property can be used to perform vector quantization, so that even with a continuous input space, the SOM lets us represent it in a discrete output space.
• The feature map is topologically ordered; that is, the spatial location of a neuron in the output lattice corresponds to a particular feature of the input.
• The feature map also reflects the statistical distribution of the input space; the domain that has the largest number of input samples gets a wider area in the feature map.

These features of SOMs make them a natural choice for many interesting applications. Here we use a SOM for clustering a range of given R, G, and B pixel values to a corresponding color map. We start by importing the required modules:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

The main component of the code is our class WTU. Its __init__ function initializes the various hyperparameters of our SOM: the dimensions of our 2D lattice (m, n), the number of features in the input (dim), the neighborhood radius (sigma), the initial weights, and the topographic information:

# Define the Winner Take All units
class WTU(object):
    def __init__(self, m, n, dim, num_iterations, eta=0.5, sigma=None):
        """
        m x n: The dimensions of the 2D lattice in which the neurons are arranged
        dim: Dimension of the input training data
        num_iterations: Total number of training iterations
        eta: Learning rate
        sigma: The radius of the neighbourhood function
        """
        self._m = m
        self._n = n
        self._neighbourhood = []
        self._topography = []
        self._num_iterations = int(num_iterations)
        self._learned = False
        self.dim = dim
        self.eta = float(eta)
        if sigma is None:
            sigma = max(m, n) / 2.0    # Constant radius
        else:
            sigma = float(sigma)
        self.sigma = sigma
        print('Network created with dimensions', m, n)
        # Weight matrix and the topography of the neurons
        self._W = tf.random.normal([m*n, dim], seed=0)
        self._topography = np.array(list(self._neuron_location(m, n)))

The most important function of the class is training(), where we use the Kohonen algorithm, as discussed before, to find the winner unit and then update the weights based on the neighborhood function:

    def training(self, x, i):
        m = self._m
        n = self._n
        # Finding the winner and its location
        d = tf.sqrt(tf.reduce_sum(tf.pow(self._W -
                tf.stack([x for _ in range(m*n)]), 2), 1))
        self.WTU_idx = tf.argmin(d, 0)
        slice_start = tf.pad(tf.reshape(self.WTU_idx, [1]),
                np.array([[0, 1]]))
        self.WTU_loc = tf.reshape(tf.slice(self._topography, slice_start,
                [1, 2]), [2])
        # Change the learning rate and radius as a function of iterations
        learning_rate = 1 - i / self._num_iterations
        _eta_new = self.eta * learning_rate
        _sigma_new = self.sigma * learning_rate
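To see what the winner search and neighborhood-weighted update amount to, here is a minimal NumPy sketch of a single Kohonen step. The kohonen_step helper and the Gaussian form of the neighbourhood function are illustrative assumptions for this sketch, not the TensorFlow implementation above:

```python
import numpy as np

def kohonen_step(W, topography, x, eta, sigma):
    """One Kohonen update: find the best-matching unit (BMU), then pull
    every neuron towards x, weighted by a Gaussian neighbourhood around
    the BMU's location in the lattice."""
    d = np.linalg.norm(W - x, axis=1)            # distance of x to every neuron
    bmu = np.argmin(d)                           # index of the winner
    bmu_loc = topography[bmu]                    # winner's lattice coordinates
    # Squared lattice distance of every neuron from the winner
    lattice_d2 = np.sum((topography - bmu_loc) ** 2, axis=1)
    h = np.exp(-lattice_d2 / (2 * sigma ** 2))   # neighbourhood function
    return W + eta * h[:, None] * (x - W)        # move weights towards x

# A 2x2 lattice of neurons with 3-dimensional (RGB) weights
topography = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
rng = np.random.default_rng(0)
W = rng.random((4, 3))
x = np.array([1.0, 0.0, 0.0])                    # a red pixel
W_new = kohonen_step(W, topography, x, eta=0.5, sigma=1.0)
```

After one step every neuron is closer to x, and the winner moves the most; shrinking eta and sigma over the iterations, as the training() code does with its linear decay, freezes the map into its final ordering.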