Figure 6.2.4: A simpler version of Figure 6.2.3 showing only the network elements involved in the computation of $\mathcal{L}_i^{(G)cond}$

The conditional loss function, $\mathcal{L}_i^{(G)cond}$, however, introduces a new problem for us: the generator can learn to ignore the input noise code, $z_i$, and rely solely on $f_{i+1}$. The entropy loss function, $\mathcal{L}_i^{(G)ent}$ in Equation 6.2.4, ensures that the generator does not ignore the noise code, $z_i$. The Q-network recovers the noise code from the output of the generator, and the difference between the recovered noise and the input noise is again measured by the L2 norm, or MSE. The following figure shows the network elements involved in the computation of $\mathcal{L}_0^{(G)ent}$:

Figure 6.2.5: A simpler version of Figure 6.2.3 showing only the network elements involved in the computation of $\mathcal{L}_0^{(G)ent}$
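Written out from the description above (a hedged reconstruction, not a verbatim copy of the book's Equation 6.2.4), the per-level entropy loss compares the input noise code with the code the Q-network recovers from the generated feature; Figure 6.2.5 shows the level-0 instance:

$\mathcal{L}_i^{(G)ent} = \left\lVert z_i - Q_i\big(G_i(f_{i+1}, z_i)\big) \right\rVert_2$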
The last loss function is similar to the usual GAN loss. It is made of a discriminator loss, $\mathcal{L}_i^{(D)}$, and a generator (adversarial) loss, $\mathcal{L}_i^{(G)adv}$. The following figure shows the elements involved in the GAN loss:

Figure 6.2.6: A simpler version of Figure 6.2.3 showing only the network elements involved in the computation of $\mathcal{L}_i^{(D)}$ and $\mathcal{L}_i^{(G)adv}$

In Equation 6.2.5, the weighted sum of the three generator loss functions is the final generator loss function. In the Keras code that we will present, all the weights are set to 1.0, except for the entropy loss, which is set to 10.0. In Equation 6.2.1 to Equation 6.2.5, i refers to the encoder and GAN group id, or level. In the original paper, the network is first trained independently and then jointly. During independent training, the encoder is trained first. During joint training, both real and fake data are used.

The implementation of the StackedGAN generator and discriminator in Keras requires a few changes to provide auxiliary points for accessing the intermediate features. Figure 6.2.7 shows the generator Keras model. Listing 6.2.2 illustrates the function that builds two generators (gen0 and gen1), corresponding to Generator0 and Generator1. The gen1 generator is made of three Dense layers, with the label and the noise code $z_1$ as inputs. The third layer generates the fake $f_1$ feature. The gen0 generator is similar to the other GAN generators we have presented and can be instantiated using the generator builder in gan.py:

    # gen0: feature1 + z0 to feature0 (image)
    gen0 = gan.generator(feature1, image_size, codes=z0)

The gen0 inputs are the $f_1$ features and the noise code $z_0$. The output is the generated fake image, $x_f$.
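For reference, the weighted sum described in Equation 6.2.5 can be written out as follows (a hedged reconstruction from the description above; the $\lambda$ naming is mine, not necessarily the book's):

$\mathcal{L}_i^{(G)} = \lambda_{adv}\,\mathcal{L}_i^{(G)adv} + \lambda_{cond}\,\mathcal{L}_i^{(G)cond} + \lambda_{ent}\,\mathcal{L}_i^{(G)ent}$

with $\lambda_{adv} = \lambda_{cond} = 1.0$ and $\lambda_{ent} = 10.0$ in the Keras code.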
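Since Listing 6.2.2 itself falls outside this excerpt, here is a minimal sketch of the gen1 builder it describes: three Dense layers taking the one-hot label and the noise code $z_1$ as inputs and emitting the fake $f_1$ feature. The dimensions (10 labels, 50-dim $z_1$, 256-dim $f_1$) and the layer widths are assumptions for illustration, not the book's exact values:

    # A minimal sketch of a gen1-style builder (cf. Listing 6.2.2), assuming
    # MNIST-like dimensions; the layer widths are illustrative guesses.
    import numpy as np
    from tensorflow.keras.layers import Input, Dense, concatenate
    from tensorflow.keras.models import Model

    def build_gen1(num_labels=10, z_dim=50, feature1_dim=256):
        labels = Input(shape=(num_labels,), name='labels')  # one-hot class label
        z1 = Input(shape=(z_dim,), name='z1')               # noise code z1
        x = concatenate([labels, z1], axis=1)
        # three Dense layers; the third generates the fake f1 feature
        x = Dense(512, activation='relu')(x)
        x = Dense(512, activation='relu')(x)
        fake_feature1 = Dense(feature1_dim, activation='relu')(x)
        return Model([labels, z1], fake_feature1, name='gen1')

    gen1 = build_gen1()
    # quick shape check: a batch of 16 one-hot labels and noise codes -> fake f1
    y = np.eye(10)[np.arange(16) % 10].astype('float32')
    z = np.random.uniform(-1.0, 1.0, size=(16, 50)).astype('float32')
    print(gen1.predict([y, z]).shape)  # expected: (16, 256)

The fake $f_1$ feature produced this way is what gen0, built with gan.generator as shown above, consumes together with $z_0$ to synthesize the fake image.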