Advanced Deep Learning with Keras
Generative Adversarial Networks (GANs)

In this chapter, we'll investigate Generative Adversarial Networks (GANs) [1], the first of three artificial intelligence algorithms that we'll be looking at. GANs belong to the family of generative models. Unlike autoencoders, however, generative models are able to create new and meaningful outputs given arbitrary encodings.

In this chapter, the working principles of GANs will be discussed. We'll also review implementations of several early GANs in Keras, and later in the chapter we'll demonstrate the techniques needed to achieve stable training. The scope of this chapter covers two popular examples of GAN implementations, Deep Convolutional GAN (DCGAN) [2] and Conditional GAN (CGAN) [3].

In summary, the goals of this chapter are to:

• Introduce the principles of GANs
• Show how to implement GANs such as DCGAN and CGAN in Keras

An overview of GANs

Before we move into the more advanced concepts of GANs, let's start by introducing their underlying ideas. GANs are very powerful; a simple demonstration of this is that they can generate new celebrity faces, not of real people, by performing latent space interpolations.
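Latent space interpolation is conceptually simple: pick two noise vectors, walk along the straight line between them, and decode every intermediate point with the generator. The sketch below is a minimal illustration in plain NumPy, with a fixed random projection standing in for a trained generator; the names `generator`, `W`, and `latent_dim` are illustrative assumptions, not code from this book.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 100

# Stand-in for a trained generator: a fixed random projection squashed
# by tanh. A real generator would be a trained Keras model.
W = rng.normal(size=(latent_dim, 4))

def generator(z):
    return np.tanh(z @ W)

# Two random latent codes and eight evenly spaced points between them
z0 = rng.normal(size=latent_dim)
z1 = rng.normal(size=latent_dim)
frames = [generator((1 - t) * z0 + t * z1) for t in np.linspace(0.0, 1.0, 8)]
```

With a real face generator, decoding each interpolated code in turn produces a smooth morph from one synthesized face to another.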
A great example of the advanced capabilities of GANs [4] can be seen in this YouTube video (https://youtu.be/G06dEcZ-QTg). The video, which shows how GANs can be utilized to produce realistic faces, demonstrates just how powerful they can be. This topic is much more advanced than anything we've looked at before in this book; for example, what the video shows can't be accomplished easily by autoencoders, which we covered in Chapter 3, Autoencoders.

GANs are able to learn how to model the input distribution by training two competing (and cooperating) networks, referred to as the generator and the discriminator (the latter sometimes known as the critic). The role of the generator is to keep figuring out how to generate fake data or signals (including audio and images) that can fool the discriminator. Meanwhile, the discriminator is trained to distinguish between fake and real signals. As training progresses, the discriminator will no longer be able to tell the difference between synthetically generated data and real data. From there, the discriminator can be discarded, and the generator can be used to create new, realistic signals that have never been observed before.

The underlying concept of GANs is straightforward. The most challenging aspect, however, is achieving stable training of the generator-discriminator network. There must be a healthy competition between the generator and the discriminator in order for both networks to be able to learn simultaneously. Since the loss function is computed from the output of the discriminator, its parameter updates are fast. When the discriminator converges faster, the generator no longer receives sufficient gradient updates for its parameters and fails to converge.
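The push and pull between the two networks can be made concrete with a deliberately tiny example. The sketch below trains an affine generator against a logistic-regression discriminator on 1-D Gaussian data, with hand-derived gradients in plain NumPy rather than the Keras models built later in this chapter; every hyperparameter and name here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: a 1-D Gaussian centered at 3
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)

lr, batch, steps = 0.05, 64, 3000
for _ in range(steps):
    # --- discriminator step: minimize BCE on real-vs-fake labels ---
    x = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    gw = np.mean(-(1 - d_real) * x + d_fake * g)  # dL_D/dw
    gc = np.mean(-(1 - d_real) + d_fake)          # dL_D/dc
    w -= lr * gw
    c -= lr * gc
    # --- generator step: minimize -log D(g) (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    dg = -(1 - d_fake) * w                        # dL_G/dg via chain rule
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean={fake.mean():.2f}, real mean=3.00")
```

Note the alternation: the discriminator is updated on a mixed real/fake batch, then the generator is updated through the discriminator's output. If only the discriminator were trained, the generator's gradient `-(1 - D(g)) * w` would shrink toward zero as `D(g)` saturates, which is exactly the instability discussed above.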
Other than being hard to train, GANs can also suffer from either partial or total mode collapse, a situation wherein the generator produces nearly identical outputs for different latent encodings.

Principles of GANs

As shown in Figure 4.1.1, a GAN is analogous to a counterfeiter (generator) versus police (discriminator) scenario. At the academy, the police are taught how to determine whether a dollar bill is genuine or fake. Samples of real dollar bills from the bank and fake money from the counterfeiter are used to train the police. However, from time to time, the counterfeiter will attempt to pretend that he printed real dollar bills. Initially, the police will not be fooled and will tell the counterfeiter why the money is fake. Taking this feedback into consideration, the counterfeiter hones his skills and attempts to produce new fake dollar bills. As expected, the police will be able both to spot the money as fake and to justify why the dollar bills are fake.
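Returning to mode collapse: a quick diagnostic is to check whether distinct latent codes still produce distinct outputs. The probe below is a rough, illustrative sketch in plain NumPy; the `output_spread` helper and both toy generators are hypothetical, not part of this book's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def output_spread(generator, latent_dim=8, n=256):
    """Mean distance between outputs of consecutive, distinct latent codes."""
    z = rng.normal(size=(n, latent_dim))
    out = generator(z)
    return float(np.mean(np.linalg.norm(out[1:] - out[:-1], axis=1)))

# A healthy generator maps different codes to visibly different outputs...
W = rng.normal(size=(8, 4))
healthy = lambda z: np.tanh(z @ W)

# ...while a collapsed generator returns nearly the same output for every code.
collapsed = lambda z: np.ones((z.shape[0], 4)) + 1e-3 * z[:, :4]

print(output_spread(healthy), output_spread(collapsed))
```

A spread near zero across varied codes is the numerical signature of a generator producing nearly identical outputs for different latent encodings.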