Advanced Deep Learning with Keras
Chapter 3

Figure 3.2.6: Digits generated as the 2-dim latent vector space is navigated

In Figure 3.2.5, we can see that the latent codes for a specific digit cluster in a region of the space. For example, digit 0 is in the lower left quadrant, while digit 1 is in the upper right quadrant. This clustering is mirrored in Figure 3.2.6, which shows the result of navigating the latent space and generating new digits, as laid out in Figure 3.2.5. For example, starting from the center and varying the value of a 2-dim latent vector toward the lower left quadrant, the digit changes from 2 to 0. This is expected since, from Figure 3.2.5, the codes for digit 2 cluster near the center, and as discussed, the codes for digit 0 cluster in the lower left quadrant. For Figure 3.2.6, we only explored the region between -4.0 and +4.0 in each latent dimension.
Autoencoders
As can be seen in Figure 3.2.5, the latent code distribution is not continuous and
ranges beyond ±4.0. Ideally, it should look like a circle with valid values
everywhere. Because of this discontinuity, there are regions where decoding
the latent vector produces hardly recognizable digits.
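To render a figure like Figure 3.2.6, one can decode a uniform grid of 2-dim latent vectors spanning this range. A minimal NumPy sketch of building that grid follows; the 30x30 resolution is an assumption for illustration, and the trained decoder itself is not shown:

```python
import numpy as np

# Build a 30x30 grid of 2-dim latent vectors spanning [-4.0, +4.0]
# in each latent dimension, the range explored in Figure 3.2.6.
n = 30
grid_x = np.linspace(-4.0, 4.0, n)
grid_y = np.linspace(-4.0, 4.0, n)[::-1]  # flip so +y is the top row
z_grid = np.array([[x, y] for y in grid_y for x in grid_x])

# Each row of z_grid is one latent vector; feeding the rows to a trained
# decoder (e.g. decoder.predict(z_grid)) yields the 900 digit images
# that are then tiled into the figure.
```

Regions of this grid that fall outside the clusters in Figure 3.2.5 are exactly where the decoded digits become hard to recognize.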
Denoising autoencoder (DAE)
We're now going to build an autoencoder with a practical application. Imagine
that the MNIST digit images were corrupted by noise, making them harder for
humans to read. We can build a Denoising Autoencoder (DAE) to remove the
noise from these images. Figure 3.3.1 shows three sets of MNIST digits. The top
rows of each set (for example, MNIST digits 7, 2, 1, 9, 0, 6, 3, 4, 9) are the original
images. The middle rows show the inputs to the DAE, which are the original
images corrupted by noise. The bottom rows show the outputs of the DAE:
Figure 3.3.1: Original MNIST digits (top rows),
corrupted images (middle rows), and denoised images (bottom rows)
Figure 3.3.2: The input to the denoising autoencoder is the corrupted image.
The output is the clean or denoised image. The latent vector is assumed to be 16-dim.
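The corruption step described above can be sketched in NumPy as follows. The noise parameters (Gaussian with mean 0.5 and standard deviation 0.5) and the random stand-in for the MNIST training set are assumptions for illustration, not taken from the text:

```python
import numpy as np

# Stand-in for normalized MNIST images in [0, 1], shape (samples, 28, 28, 1).
# In practice these would be loaded via keras.datasets.mnist.
rng = np.random.default_rng(42)
x_train = rng.random((100, 28, 28, 1)).astype("float32")

# Corrupt with additive Gaussian noise (assumed mean 0.5, sigma 0.5), then
# clip back to the valid pixel range [0, 1]. The noisy images become the
# DAE inputs, while the clean x_train images serve as the training targets.
noise = rng.normal(loc=0.5, scale=0.5, size=x_train.shape)
x_train_noisy = np.clip(x_train + noise, 0.0, 1.0).astype("float32")
```

Clipping matters here: without it, the corrupted pixels would leave the valid intensity range and no longer resemble image data.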