Advanced Deep Learning with Keras
Chapter 4: Generative Adversarial Networks (GANs)

discriminator.trainable = False
# adversarial = generator + discriminator
adversarial = Model(inputs,
                    discriminator(generator(inputs)),
                    name=model_name)
adversarial.compile(loss='binary_crossentropy',
                    optimizer=optimizer,
                    metrics=['accuracy'])
adversarial.summary()

# train discriminator and adversarial networks
models = (generator, discriminator, adversarial)
params = (batch_size, latent_size, train_steps, model_name)
train(models, x_train, params)

Listing 4.2.4 shows the function dedicated to training the discriminator and adversarial networks. Because of the custom training loop, the usual fit() function is not going to be used. Instead, train_on_batch() is called to run a single gradient update for the given batch of data. The generator is then trained via the adversarial network.

The training first randomly picks a batch of real images from the dataset. This is labeled as real (1.0). Then, a batch of fake images is generated by the generator. This is labeled as fake (0.0). The two batches are concatenated and used to train the discriminator.

After this is completed, a new batch of fake images is generated by the generator and labeled as real (1.0). This batch is used to train the adversarial network. The two networks are trained alternately for about 40,000 steps. At regular intervals, the MNIST digits generated from a fixed noise vector are saved on the filesystem. At the last training step, the network has converged. The generator model is also saved to a file so we can easily reuse the trained model for future MNIST digit generation. However, only the generator model is saved, since that is the part of a GAN that is useful for generating new MNIST digits. For example, we can generate new and random MNIST digits by executing:

python3 dcgan-mnist-4.2.1.py --generator=dcgan_mnist.h5
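Under this option, the script only needs to load the saved generator and run it on fresh noise vectors. The following is a minimal sketch of that generation path, assuming a latent_size of 100 (the value used in this chapter's DCGAN) and the standard Keras load_model() API; it is not the verbatim code of dcgan-mnist-4.2.1.py:

import numpy as np
from keras.models import load_model

# assumption: the latent vector size must match the one used
# in training (100 in this chapter's DCGAN example)
latent_size = 100

# load the trained generator saved at the last training step
generator = load_model('dcgan_mnist.h5')

# sample noise from the same uniform distribution used in training
noise = np.random.uniform(-1.0, 1.0, size=[16, latent_size])

# generate 16 new MNIST digits; output shape is (16, 28, 28, 1)
fake_images = generator.predict(noise)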
Listing 4.2.4, dcgan-mnist-4.2.1.py shows us the function to train the discriminator and adversarial networks:

def train(models, x_train, params):
    """Train the Discriminator and Adversarial Networks

    Alternately train Discriminator and Adversarial networks by batch.
    Discriminator is trained first with properly labeled real and fake images.
    Adversarial is trained next with fake images pretending to be real.
    Generate sample images per save_interval.
    # Arguments
        models (list): Generator, Discriminator, Adversarial models
        x_train (tensor): Train images
        params (list): Network parameters

    """
    # the GAN models
    generator, discriminator, adversarial = models
    # network parameters
    batch_size, latent_size, train_steps, model_name = params
    # the generator image is saved every 500 steps
    save_interval = 500
    # noise vector to see how the generator output evolves
    # during training
    noise_input = np.random.uniform(-1.0, 1.0, size=[16, latent_size])
    # number of elements in train dataset
    train_size = x_train.shape[0]
    for i in range(train_steps):
        # train the discriminator for 1 batch
        # 1 batch of real (label=1.0) and fake images (label=0.0)
        # randomly pick real images from dataset
        rand_indexes = np.random.randint(0, train_size, size=batch_size)
        real_images = x_train[rand_indexes]
        # generate fake images from noise using generator
        # generate noise using uniform distribution
        noise = np.random.uniform(-1.0, 1.0,
                                  size=[batch_size, latent_size])
        # generate fake images
        fake_images = generator.predict(noise)
        # real + fake images = 1 batch of train data
        x = np.concatenate((real_images, fake_images))
        # label real and fake images
        # real images label is 1.0
        y = np.ones([2 * batch_size, 1])
        # fake images label is 0.0
        y[batch_size:, :] = 0.0
        # train discriminator network, log the loss and accuracy
        loss, acc = discriminator.train_on_batch(x, y)
        log = "%d: [discriminator loss: %f, acc: %f]" % (i, loss, acc)
        # train the adversarial network for 1 batch
        # 1 batch of fake images with label=1.0
        # since the discriminator weights are frozen in adversarial network
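The listing is cut off here by the page break. Based on the prose above (a fresh batch of noise labeled as real trains the generator through the adversarial network, with sample images saved every save_interval steps and the generator saved at the end), the remainder of the loop plausibly looks like the following sketch; the logging format and the plot_images() helper are assumptions, not the book's verbatim code:

        # --- continuation sketch (assumed, not verbatim) ---
        # generate a fresh batch of noise
        noise = np.random.uniform(-1.0, 1.0,
                                  size=[batch_size, latent_size])
        # label the fake images as real (1.0) to fool the discriminator
        y = np.ones([batch_size, 1])
        # only the generator weights are updated since the
        # discriminator weights are frozen in the adversarial network
        loss, acc = adversarial.train_on_batch(noise, y)
        log = "%s [adversarial loss: %f, acc: %f]" % (log, loss, acc)
        print(log)
        if (i + 1) % save_interval == 0:
            # plot_images() stands in for whatever helper saves the
            # digits generated from the fixed noise_input vector
            plot_images(generator, noise_input=noise_input,
                        show=False, step=(i + 1),
                        model_name=model_name)
    # save only the generator for future MNIST digit generation
    generator.save(model_name + ".h5")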