Advanced Deep Learning with Keras
Chapter 3

# Mean Square Error (MSE) loss function, Adam optimizer
autoencoder.compile(loss='mse', optimizer='adam')

# train the autoencoder
autoencoder.fit(x_train_noisy,
                x_train,
                validation_data=(x_test_noisy, x_test),
                epochs=10,
                batch_size=batch_size)

# predict the autoencoder output from corrupted test images
x_decoded = autoencoder.predict(x_test_noisy)

# 3 sets of images with 9 MNIST digits
# 1st rows - original images
# 2nd rows - images corrupted by noise
# 3rd rows - denoised images
rows, cols = 3, 9
num = rows * cols
imgs = np.concatenate([x_test[:num], x_test_noisy[:num], x_decoded[:num]])
imgs = imgs.reshape((rows * 3, cols, image_size, image_size))
imgs = np.vstack(np.split(imgs, rows, axis=1))
imgs = imgs.reshape((rows * 3, -1, image_size, image_size))
imgs = np.vstack([np.hstack(i) for i in imgs])
imgs = (imgs * 255).astype(np.uint8)
plt.figure()
plt.axis('off')
plt.title('Original images: top rows, '
          'Corrupted Input: middle rows, '
          'Denoised Input: bottom rows')
plt.imshow(imgs, interpolation='none', cmap='gray')
Image.fromarray(imgs).save('corrupted_and_denoised.png')
plt.show()

Automatic colorization autoencoder

We're now going to work on another practical application of autoencoders. In this case, we imagine that we have grayscale photos and want to build a tool that automatically adds color to them. We would like to replicate the human ability to identify that the sea and sky are blue, grass fields and trees are green, clouds are white, and so on.
As shown in Figure 3.4.1, if we are given a grayscale photo with a rice field in the foreground, a volcano in the background, and sky on top, we are able to add the appropriate colors.

Figure 3.4.1: Adding color to a grayscale photo of the Mayon Volcano. A colorization network should replicate human abilities by adding color to a grayscale photo. The left photo is grayscale; the right photo is in color. The original color photo can be found in the book's GitHub repository, https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter3-autoencoders/README.md.

A simple automatic colorization algorithm seems like a suitable problem for autoencoders. If we can train an autoencoder with a sufficient number of grayscale photos as input and the corresponding color photos as output, it could possibly discover the hidden structure of how to properly apply colors. Roughly, it is the reverse of denoising. The question is, can an autoencoder add color (good noise) to the original grayscale image?

Listing 3.4.1 shows the colorization autoencoder network, a modified version of the denoising autoencoder that we used for the MNIST dataset. First, we need a dataset of paired grayscale and color photos. The CIFAR10 dataset, which we have used before, has 50,000 training and 10,000 test 32 × 32 RGB photos that can be converted to grayscale. As shown in the following listing, we use the rgb2gray() function to apply weights to the R, G, and B components to convert from color to grayscale.

Listing 3.4.1, colorization-autoencoder-cifar10-3.4.1.py, shows a colorization autoencoder using the CIFAR10 dataset:

from keras.layers import Dense, Input
from keras.layers import Conv2D, Flatten
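The rgb2gray() conversion mentioned above can be sketched in plain NumPy. This is a minimal stand-in, assuming the standard luma weights (0.299, 0.587, 0.114); the helper actually defined in the book's script may differ in detail:

```python
import numpy as np

def rgb2gray(rgb):
    # weight the R, G, and B channels with the standard
    # luma coefficients to produce a single grayscale channel
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

# toy batch: 2 RGB images of 4 x 4 pixels, values in [0, 1]
batch = np.random.rand(2, 4, 4, 3)
gray = rgb2gray(batch)
print(gray.shape)  # (2, 4, 4): the channel axis is gone
```

For CIFAR10, the same call would map the (50000, 32, 32, 3) training tensor to (50000, 32, 32); a trailing channel axis then needs to be restored before the images are fed to Conv2D layers.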
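As an aside, the grid-assembly idiom used in the denoising visualization earlier (the sequence of reshape, split, vstack, and hstack calls) can be checked on its own. The sketch below uses random stand-ins for x_test, x_test_noisy, and x_decoded, assuming 28 × 28 MNIST-sized images, and tiles the three sets into a single mosaic:

```python
import numpy as np

rows, cols = 3, 9
image_size = 28
num = rows * cols

# random stand-ins for the original, corrupted, and denoised digits
originals = np.random.rand(num, image_size, image_size)
corrupted = np.random.rand(num, image_size, image_size)
denoised = np.random.rand(num, image_size, image_size)

# same idiom as the listing: stack all three sets, then regroup
# and tile them into one 2-D mosaic image
imgs = np.concatenate([originals, corrupted, denoised])
imgs = imgs.reshape((rows * 3, cols, image_size, image_size))
imgs = np.vstack(np.split(imgs, rows, axis=1))
imgs = imgs.reshape((rows * 3, -1, image_size, image_size))
imgs = np.vstack([np.hstack(i) for i in imgs])
print(imgs.shape)  # (252, 252): a 9 x 9 grid of 28 x 28 digits
```

Scaling by 255 and casting to uint8, as in the chapter's code, then makes the mosaic displayable with matplotlib or saveable with PIL.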