Advanced Deep Learning with Keras
Chapter 6: Disentangled Representation GANs

# generate random 50-dim z1 latent code
real_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
# real labels from dataset
real_labels = y_train[rand_indexes]

# generate fake feature1 using generator1 from
# real labels and 50-dim z1 latent code
fake_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
fake_feature1 = gen1.predict([real_labels, fake_z1])
# real + fake data
feature1 = np.concatenate((real_feature1, fake_feature1))
z1 = np.concatenate((fake_z1, fake_z1))
# label 1st half as real and 2nd half as fake
y = np.ones([2 * batch_size, 1])
y[batch_size:, :] = 0

# train discriminator1 to classify feature1 as real/fake and
# recover the latent code (z1): real = from encoder1,
# fake = from generator1
# joint training using discriminator part of adversarial1 loss
# and entropy1 loss
metrics = dis1.train_on_batch(feature1, [y, z1])
# log the overall loss only (from dis1.metrics_names)
log = "%d: [dis1_loss: %f]" % (i, metrics[0])

# train the discriminator0 for 1 batch
# 1 batch of real (label=1.0) and fake images (label=0.0)
# generate random 50-dim z0 latent code
fake_z0 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
# generate fake images from real feature1 and fake z0
fake_images = gen0.predict([real_feature1, fake_z0])
# real + fake data
x = np.concatenate((real_images, fake_images))
z0 = np.concatenate((fake_z0, fake_z0))
# train discriminator0 to classify image as real/fake and
# recover the latent code (z0)
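The real/fake batch assembly above follows a fixed pattern: stack the real features on top of the generated ones, then label the first half 1.0 and the second half 0.0. A minimal NumPy sketch of that pattern, with a hypothetical `fake_gen` stub standing in for `gen1.predict()` and an assumed `feat_dim` (not the book's actual feature width):

```python
import numpy as np

batch_size = 4
z_dim = 50
feat_dim = 256  # assumed feature1 width, for illustration only

rng = np.random.default_rng(0)
# stand-in for encoder0 output on a real mini-batch
real_feature1 = rng.normal(size=(batch_size, feat_dim))

def fake_gen(labels, z):
    # hypothetical stub: returns any array shaped like feature1;
    # in StackedGAN this is a trained Keras generator model
    return rng.normal(size=(len(z), feat_dim))

# sample a 50-dim latent code and generate fake features
fake_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
fake_feature1 = fake_gen(None, fake_z1)

# stack real on top of fake, then label the halves 1.0 / 0.0
feature1 = np.concatenate((real_feature1, fake_feature1))
y = np.ones([2 * batch_size, 1])
y[batch_size:, :] = 0

print(feature1.shape)                                  # (8, 256)
print(y[:batch_size].sum(), y[batch_size:].sum())      # 4.0 0.0
```

The same assembly is reused for discriminator0, only with images in place of features.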
# joint training using discriminator part of adversarial0 loss
# and entropy0 loss
metrics = dis0.train_on_batch(x, [y, z0])
# log the overall loss only (from dis0.metrics_names)
log = "%s [dis0_loss: %f]" % (log, metrics[0])

# adversarial training
# generate fake z1 latent code
fake_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
# input to generator1 is sampled from real labels and
# the 50-dim z1 latent code
gen1_inputs = [real_labels, fake_z1]
# label fake feature1 as real
y = np.ones([batch_size, 1])
# train generator1 (through adversarial1) by fooling the
# discriminator and approximating the encoder1 feature1 generator
# joint training: adversarial1, entropy1, conditional1
metrics = adv1.train_on_batch(gen1_inputs,
                              [y, fake_z1, real_labels])
fmt = "%s [adv1_loss: %f, enc1_acc: %f]"
# log the overall loss and classification accuracy
log = fmt % (log, metrics[0], metrics[6])

# input to generator0 is real feature1 and
# the 50-dim z0 latent code
fake_z0 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
gen0_inputs = [real_feature1, fake_z0]
# train generator0 (through adversarial0) by fooling the
# discriminator and approximating the encoder1 image source generator
# joint training: adversarial0, entropy0, conditional0
metrics = adv0.train_on_batch(gen0_inputs,
                              [y, fake_z0, real_feature1])
# log the overall loss only
log = "%s [adv0_loss: %f]" % (log, metrics[0])
print(log)
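Each training step therefore runs four `train_on_batch` calls in a fixed order (dis1, dis0, adv1, adv0), appending each result to one log string. The ordering and log format can be sketched without TensorFlow using a hypothetical `Stub` class in place of the Keras models; the returned list mimics Keras' `[total_loss, ...per-output losses/metrics]` layout and the loss values are made up:

```python
class Stub:
    """Hypothetical stand-in for a compiled Keras model."""
    def __init__(self, loss):
        self.loss = loss

    def train_on_batch(self, x, y):
        # Keras returns total loss first, then per-output
        # losses/metrics; index 6 plays the role of enc1 accuracy
        return [self.loss, 0.0, 0.0, 0.0, 0.0, 0.0, 0.95]

dis1, dis0, adv1, adv0 = Stub(1.2), Stub(0.9), Stub(2.1), Stub(1.8)

for i in range(1):  # one illustrative step; the book loops train_steps
    # 1) discriminator1 on features, 2) discriminator0 on images
    metrics = dis1.train_on_batch(None, None)
    log = "%d: [dis1_loss: %f]" % (i, metrics[0])
    metrics = dis0.train_on_batch(None, None)
    log = "%s [dis0_loss: %f]" % (log, metrics[0])
    # 3) adversarial1, 4) adversarial0 (generators via frozen critics)
    metrics = adv1.train_on_batch(None, None)
    log = "%s [adv1_loss: %f, enc1_acc: %f]" % (log, metrics[0],
                                                metrics[6])
    metrics = adv0.train_on_batch(None, None)
    log = "%s [adv0_loss: %f]" % (log, metrics[0])
    print(log)
```

The index used for `enc1_acc` (here 6) depends on how the adversarial1 model was compiled; in the real code it should be read off `adv1.metrics_names` rather than hard-coded blindly.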