Advanced Deep Learning with Keras
Chapter 5: Improved GANs

x_train = x_train.astype('float32') / 255

model_name = "wgan_mnist"
# network parameters
# the latent or z vector is 100-dim
latent_size = 100
# hyper parameters from WGAN paper [2]
n_critic = 5
clip_value = 0.01
batch_size = 64
lr = 5e-5
train_steps = 40000
input_shape = (image_size, image_size, 1)

# build discriminator model
inputs = Input(shape=input_shape, name='discriminator_input')
# WGAN uses linear activation in paper [2]
discriminator = gan.discriminator(inputs, activation='linear')
optimizer = RMSprop(lr=lr)
# WGAN discriminator uses wasserstein loss
discriminator.compile(loss=wasserstein_loss,
                      optimizer=optimizer,
                      metrics=['accuracy'])
discriminator.summary()

# build generator model
input_shape = (latent_size, )
inputs = Input(shape=input_shape, name='z_input')
generator = gan.generator(inputs, image_size)
generator.summary()

# build adversarial model = generator + discriminator
# freeze the weights of discriminator
# during adversarial training
discriminator.trainable = False
adversarial = Model(inputs,
                    discriminator(generator(inputs)),
                    name=model_name)
adversarial.compile(loss=wasserstein_loss,
                    optimizer=optimizer,
                    metrics=['accuracy'])
adversarial.summary()

# train discriminator and adversarial networks
models = (generator, discriminator, adversarial)
params = (batch_size,
          latent_size,
          n_critic,
          clip_value,
          train_steps,
          model_name)
train(models, x_train, params)
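The wasserstein_loss used in both compile() calls is defined outside this excerpt. A minimal sketch consistent with the EMD objective, assuming the convention that real images are labeled 1.0 and fake images -1.0:

from keras import backend as K

def wasserstein_loss(y_label, y_pred):
    # the critic should score real samples (label 1.0) high and
    # fake samples (label -1.0) low; Keras minimizes the loss,
    # hence the negated mean
    return -K.mean(y_label * y_pred)

This is also why the discriminator is built with activation='linear': the critic outputs an unbounded score rather than a probability, so there is no sigmoid on its final Dense layer.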
Listing 5.1.2, wgan-mnist-5.1.2.py. The training procedure for WGAN closely follows Algorithm 5.1.1. The discriminator is trained for n_critic iterations per generator training iteration:
def train(models, x_train, params):
    """Train the Discriminator and Adversarial Networks

    Alternately train the Discriminator and Adversarial networks
    by batch. The Discriminator is trained first with properly
    labeled real and fake images for n_critic times.
    Discriminator weights are clipped as a requirement of the
    Lipschitz constraint.
    The Generator is trained next (via the Adversarial network)
    with fake images pretending to be real.
    Sample images are generated every save_interval steps.

    # Arguments
        models (list): Generator, Discriminator, Adversarial models
        x_train (tensor): Train images
        params (list): Network parameters
    """
    # the GAN models
    generator, discriminator, adversarial = models
    # network parameters
    (batch_size, latent_size, n_critic,
     clip_value, train_steps, model_name) = params
    # the generator image is saved every 500 steps
    save_interval = 500
    # fixed noise vector to see how the generator output
    # evolves during training
    noise_input = np.random.uniform(-1.0, 1.0, size=[16, latent_size])
    # number of elements in the train dataset
    train_size = x_train.shape[0]
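The listing breaks at the page boundary before the training loop itself. As a sketch only, the loop described in the docstring could continue along these lines (assuming the real = 1.0 / fake = -1.0 label convention above, and the gan.plot_images helper from the book's companion gan.py):

    # label real images as 1.0; fake images will use -1.0
    real_labels = np.ones((batch_size, 1))
    for i in range(train_steps):
        # train the discriminator (critic) for n_critic iterations
        for _ in range(n_critic):
            # a random batch of real images from the dataset
            rand_indexes = np.random.randint(0, train_size,
                                             size=batch_size)
            real_images = x_train[rand_indexes]
            # a batch of fake images from uniform noise
            noise = np.random.uniform(-1.0, 1.0,
                                      size=[batch_size, latent_size])
            fake_images = generator.predict(noise)
            # train on real and fake batches separately
            discriminator.train_on_batch(real_images, real_labels)
            discriminator.train_on_batch(fake_images, -real_labels)
            # clip weights to satisfy the Lipschitz constraint
            for layer in discriminator.layers:
                weights = [np.clip(w, -clip_value, clip_value)
                           for w in layer.get_weights()]
                layer.set_weights(weights)
        # train the generator (via the adversarial network) with
        # fake images labeled as real
        noise = np.random.uniform(-1.0, 1.0,
                                  size=[batch_size, latent_size])
        adversarial.train_on_batch(noise, real_labels)
        if (i + 1) % save_interval == 0:
            # save generator output for the fixed noise_input
            # (gan.plot_images is assumed from the book's gan.py)
            gan.plot_images(generator,
                            noise_input=noise_input,
                            show=False,
                            step=(i + 1),
                            model_name=model_name)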