Advanced Deep Learning with Keras


Figure 6.2.4: A simpler version of Figure 6.2.3 showing only the network elements involved in the computation of $\mathcal{L}_i^{(G)cond}$

The conditional loss function, however, introduces a new problem for us. The generator ignores the input noise code, $z_i$, and simply relies on $f_{i+1}$. The entropy loss function, $\mathcal{L}_i^{(G)ent}$ in Equation 6.2.4, ensures that the generator does not ignore the noise code, $z_i$. The Q-Network recovers the noise code from the output of the generator. The difference between the recovered noise and the input noise is also measured by L2 or MSE.

The following figure shows the network elements involved in the computation of $\mathcal{L}_i^{(G)ent}$:

Figure 6.2.5: A simpler version of Figure 6.2.3 showing only the network elements involved in the computation of $\mathcal{L}_i^{(G)ent}$
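To make the entropy loss concrete, here is a minimal Keras sketch of a Q-Network that recovers the noise code from the generator's fake feature and is trained with MSE. The layer sizes and names (feature1_dim, z1_dim, and so on) are illustrative assumptions, not the book's exact listing:

from keras.layers import Input, Dense
from keras.models import Model

# Illustrative dimensions (assumptions, not the book's exact values)
feature1_dim = 256  # size of the fake feature f1 from the generator
z1_dim = 50         # size of the noise code z1

# Q-Network: recovers the noise code z1 from the fake feature f1
fake_feature1 = Input(shape=(feature1_dim,), name='fake_feature1')
x = Dense(256, activation='relu')(fake_feature1)
z1_recovered = Dense(z1_dim, name='z1_recovered')(x)
q1_network = Model(fake_feature1, z1_recovered, name='q1_network')

# The entropy loss is the MSE between the input noise code z1 and
# the code the Q-Network recovers from the generated feature
q1_network.compile(loss='mse', optimizer='adam')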

The last loss function is similar to the usual GAN loss. It is made of a discriminator loss $\mathcal{L}_i^{(D)}$ and a generator (through adversarial) loss $\mathcal{L}_i^{(G)adv}$. The following figure shows the elements involved in the GAN loss:

Figure 6.2.6: A simpler version of Figure 6.2.3 showing only the network elements involved in the computation of $\mathcal{L}_i^{(D)}$ and $\mathcal{L}_i^{(G)adv}$

In Equation 6.2.5, the weighted sum of the three generator loss functions is the final generator loss function. In the Keras code that we will present, all the weights are set to 1.0, except for the entropy loss, which is set to 10.0. In Equation 6.2.1 to Equation 6.2.5, $i$ refers to the encoder and GAN group id or level. In the original paper, the network is first trained independently and then jointly. During independent training, the encoder is trained first. During joint training, both real and fake data are used.

The implementation of the StackedGAN generator and discriminator in Keras requires a few changes to provide auxiliary points to access the intermediate features. Figure 6.2.7 shows the generator Keras model. Listing 6.2.2 illustrates the function that builds two generators (gen0 and gen1) corresponding to Generator0 and Generator1. The gen1 generator is made of three Dense layers with the label and the noise code $z_1$ as inputs. The third layer generates the fake $f_1$ feature. The gen0 generator is similar to the other GAN generators that we have presented and can be instantiated using the generator builder in gan.py:

# gen0: feature1 + z0 to feature0 (image)
gen0 = gan.generator(feature1, image_size, codes=z0)

The gen0 input is the $f_1$ feature and the noise code $z_0$. The output is the generated fake image, $x_f$:
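As a rough sketch of how gen1 in Listing 6.2.2 can be wired (the layer sizes here are illustrative assumptions; gan.py is the book's helper module), three Dense layers take the concatenated label and noise code $z_1$ and output the fake $f_1$ feature:

from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Illustrative dimensions (assumptions, not the book's exact values)
num_labels = 10     # one-hot label y
z1_dim = 50         # noise code z1
feature1_dim = 256  # fake feature f1

# gen1: label + z1 -> fake feature1, built from three Dense layers
y_labels = Input(shape=(num_labels,), name='labels')
z1 = Input(shape=(z1_dim,), name='z1')
inputs = concatenate([y_labels, z1])
x = Dense(512, activation='relu')(inputs)
x = Dense(512, activation='relu')(x)
fake_feature1 = Dense(feature1_dim, activation='relu')(x)
gen1 = Model([y_labels, z1], fake_feature1, name='gen1')

When the adversarial models are compiled, the weighted sum in Equation 6.2.5 maps naturally to the loss_weights argument of compile(), for example loss_weights=[1.0, 1.0, 10.0], with the entropy (MSE) loss carrying the 10.0 weight mentioned above.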

