Advanced Deep Learning with Keras
Chapter 4

Figure 4.3.4: The fake images generated by CGAN at different training steps when conditioned with labels [0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5]

You're encouraged to run the trained generator model to see newly synthesized MNIST digit images:

python3 cgan-mnist-4.3.1.py --generator=cgan_mnist.h5

Alternatively, a specific digit (for example, 8) can be requested:

python3 cgan-mnist-4.3.1.py --generator=cgan_mnist.h5 --digit=8

With CGAN, it is like having an agent that we can ask to draw digits, similar to how humans write digits. The key advantage of CGAN over DCGAN is that we can specify which digit we want the agent to draw.
Generative Adversarial Networks (GANs)
Conclusion
This chapter discussed the general principles behind GANs, giving you a foundation
for the more advanced topics we'll move on to next, including Improved GANs,
Disentangled Representation GANs, and Cross-Domain GANs. We started this
chapter by understanding how GANs are made up of two networks, called the generator
and the discriminator. The role of the discriminator is to distinguish between real
and fake signals, while the aim of the generator is to fool the discriminator.
The generator is normally combined with the discriminator to form an adversarial
network. It is through training this adversarial network that the generator learns
how to produce fake signals that can trick the discriminator.
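The combination described above can be sketched in Keras. The following is a minimal illustration, not the book's actual DCGAN code: the layer sizes are toy choices, and the key point is that the discriminator is frozen before it is stacked on top of the generator, so training the combined model updates only the generator's weights.

```python
# Minimal sketch of stacking a generator and a frozen discriminator
# into an adversarial model. Layer sizes are illustrative only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 16       # size of the random noise vector z (illustrative)
data_dim = 28 * 28    # flattened MNIST-sized image

# Generator: maps noise z to a fake flattened image
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(data_dim, activation="sigmoid"),
], name="generator")

# Discriminator: maps an image to a real/fake probability
discriminator = keras.Sequential([
    keras.Input(shape=(data_dim,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
], name="discriminator")
discriminator.compile(loss="binary_crossentropy", optimizer="adam")

# Freeze the discriminator, then stack: adversarial(z) = D(G(z)).
# Training this combined model adjusts only the generator weights,
# pushing G to produce samples that D classifies as real.
discriminator.trainable = False
adversarial = keras.Sequential([generator, discriminator], name="adversarial")
adversarial.compile(loss="binary_crossentropy", optimizer="adam")

z = np.random.normal(size=(4, latent_dim))
p_fake = adversarial.predict(z)   # probability D assigns to G(z)
print(p_fake.shape)               # (4, 1)
```

In the actual training loop, the discriminator is first trained on a batch of labeled real and fake images, and then the adversarial model is trained on noise with "real" labels, which is what fools the discriminator over time.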
We also learned how GANs are easy to build but notoriously difficult to train.
Two example implementations in Keras were presented. DCGAN demonstrated
that it is possible to train GANs to generate fake images using deep CNNs. The
fake images are MNIST digits. However, the DCGAN generator has no control over
which specific digit it should draw. CGAN addressed this problem by conditioning
the generator to draw a specific digit. The condition is in the form of a one-hot label.
CGAN is useful if we want to build an agent that can generate data of a specific class.
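The conditioning mechanism itself is simple: the one-hot label is concatenated with the noise vector before it enters the generator (and with the image input on the discriminator side). A small NumPy sketch, with illustrative sizes that need not match the book's code:

```python
# Sketch of forming a CGAN generator input: noise z concatenated
# with a one-hot label. Dimensions here are illustrative.
import numpy as np

latent_dim = 100   # length of the noise vector (illustrative)
num_classes = 10   # MNIST digits 0-9

def generator_input(z, digit):
    """Concatenate noise z with a one-hot label for the requested digit."""
    one_hot = np.zeros(num_classes)
    one_hot[digit] = 1.0
    return np.concatenate([z, one_hot])

z = np.random.normal(size=latent_dim)
x = generator_input(z, digit=8)
print(x.shape)   # (110,) -- 100 noise values plus a 10-way one-hot label
```

Because the label is part of the input during training, the generator learns to associate each one-hot code with one digit class, which is why at inference time we can request a specific digit such as 8.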
In the next chapter, improvements to DCGAN and CGAN will be introduced.
In particular, the focus is on how to stabilize the training of DCGAN and how to
improve the perceptual quality of CGAN. This will be done by introducing new
loss functions and slightly different model architectures.
References
1. Ian Goodfellow. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv preprint arXiv:1701.00160, 2016 (https://arxiv.org/pdf/1701.00160.pdf).
2. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434, 2015 (https://arxiv.org/pdf/1511.06434.pdf).
3. Mehdi Mirza and Simon Osindero. Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784, 2014 (https://arxiv.org/pdf/1411.1784.pdf).
4. Tero Karras and others. Progressive Growing of GANs for Improved Quality, Stability, and Variation. ICLR, 2018 (https://arxiv.org/pdf/1710.10196.pdf).