Advanced Deep Learning with Keras
Listing 7.1.3, cyclegan-7.1.1.py shows the discriminator implementation in Keras:

def build_discriminator(input_shape,
                        kernel_size=3,
                        patchgan=True,
                        name=None):
    """The discriminator is a 4-layer encoder that outputs either
    a 1-dim or a n x n-dim patch of probability that input is real

    Arguments:
    input_shape (tuple): input shape
    kernel_size (int): kernel size of decoder layers
    patchgan (bool): whether the output is a patch or just a 1-dim
    name (string): name assigned to discriminator model

    Returns:
    discriminator (Model):
    """
    inputs = Input(shape=input_shape)
    x = encoder_layer(inputs,
                      32,
                      kernel_size=kernel_size,
                      activation='leaky_relu',
                      instance_norm=False)
    x = encoder_layer(x,
                      64,
                      kernel_size=kernel_size,
                      activation='leaky_relu',
                      instance_norm=False)
    x = encoder_layer(x,
                      128,
                      kernel_size=kernel_size,
                      activation='leaky_relu',
                      instance_norm=False)
    x = encoder_layer(x,
                      256,
                      kernel_size=kernel_size,
                      strides=1,
                      activation='leaky_relu',
                      instance_norm=False)

    # if patchgan=True use nxn-dim output of probability
    # else use 1-dim output of probability
    if patchgan:
        x = LeakyReLU(alpha=0.2)(x)
        outputs = Conv2D(1,
                         kernel_size=kernel_size,
                         strides=1,
                         padding='same')(x)
    else:
        x = Flatten()(x)
        x = Dense(1)(x)
        outputs = Activation('linear')(x)

    discriminator = Model(inputs, outputs, name=name)
    return discriminator
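When patchgan=True, the discriminator scores an n x n grid of patches instead of emitting one scalar per image. Assuming the first three encoder layers downsample by stride 2 with 'same' padding and the fourth uses stride 1 (consistent with the calls in the listing), the score map is the input size divided by 8, rounded up. The following is a minimal pure-Python sketch of that shape arithmetic; the helper name is ours, not from the book's code:

```python
import math

def patchgan_output_size(input_size, downsampling_layers=3):
    """Spatial size of the PatchGAN score map for a square input,
    assuming stride-2 'same'-padded convolutions in the first
    `downsampling_layers` encoder layers and stride 1 afterward."""
    size = input_size
    for _ in range(downsampling_layers):
        # a stride-2 'same' convolution halves the size, rounding up
        size = math.ceil(size / 2)
    return size

# For a 32 x 32 input image, the discriminator outputs a 4 x 4 patch of scores.
print(patchgan_output_size(32))
```

Each of those scores judges one overlapping region of the input, which is why a PatchGAN discriminator penalizes local artifacts more directly than a single-output one.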
Using the generator and discriminator builders, we are now able to build the
CycleGAN. Listing 7.1.4 shows the builder function. In line with our discussion in
the previous section, two generators, g_source = F and g_target = G, and two
discriminators, d_source = D_x and d_target = D_y, are instantiated. The forward
cycle is x' = F(G(x)) = reco_source = g_source(g_target(source_input)).
The backward cycle is y' = G(F(y)) = reco_target = g_target(g_source(target_input)).
The inputs to the adversarial model are the source and target data, while the outputs
are the outputs of D_x and D_y and the reconstructed inputs, x' and y'. The identity
network is not used in this example because of the mismatch between the number
of channels of the grayscale and color images. We use the recommended
loss weights of λ1 = 1.0 and λ2 = 10.0 for the GAN and cyclic consistency losses,
respectively. Similar to the GANs in the previous chapters, we use RMSprop with
a learning rate of 2e-4 and a decay rate of 6e-8 as the optimizer of the discriminators.
The learning and decay rates of the adversarial model are half those of the discriminators.
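The effect of the loss weights can be illustrated with a small numeric sketch. The function names below are ours (in the actual model, Keras applies such weights through the loss_weights argument when compiling); the cycle-consistency term uses mean absolute error, and λ2 = 10.0 makes reconstruction dominate the total:

```python
def mae(a, b):
    """Mean absolute error between two equal-length sequences,
    standing in for the cycle-consistency (reconstruction) loss."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def weighted_cyclegan_loss(adv_loss, cycle_loss, lam1=1.0, lam2=10.0):
    """Combine the GAN and cyclic consistency terms with the
    recommended weights from the text (lambda1=1.0, lambda2=10.0)."""
    return lam1 * adv_loss + lam2 * cycle_loss

# Toy numbers: even a small reconstruction error outweighs the
# adversarial term because of lambda2 = 10.
adv = 0.5
cycle = mae([0.0, 0.2, 0.4], [0.1, 0.1, 0.5])  # mean of 0.1, 0.1, 0.1
print(weighted_cyclegan_loss(adv, cycle))      # approximately 0.5 + 10 * 0.1 = 1.5
```

This weighting encodes the design priority of CycleGAN: the generators must first preserve content across the cycle, and only secondarily fool the discriminators.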
Listing 7.1.4, cyclegan-7.1.1.py shows us the CycleGAN builder in Keras:

def build_cyclegan(shapes,
                   source_name='source',
                   target_name='target',
                   kernel_size=3,
                   patchgan=False,
                   identity=False):
    """Build the CycleGAN