
Generative Adversarial Networks (GANs)

The generator learns to generate fake images from a 100-dim input vector and a specified digit. The discriminator classifies images as real or fake based on both the images and their corresponding labels.
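As a rough sketch of this conditioning (not the book's full listing, which appears later in the chapter), the one-hot label can simply be concatenated with the generator's 100-dim input vector and with the flattened image fed to the discriminator. The layer sizes below are illustrative assumptions, built with the Keras functional API:

```python
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, concatenate
from tensorflow.keras.models import Model

latent_size = 100   # z: the 100-dim input vector
num_labels = 10     # one-hot label y for digits 0-9
image_size = 28     # MNIST-sized grayscale images (assumption)

# Generator: fake image = G(z | y), with z conditioned on the one-hot label
z = Input(shape=(latent_size,), name='z_input')
y = Input(shape=(num_labels,), name='labels')
g = concatenate([z, y], axis=1)                       # condition z on y
g = Dense(256, activation='relu')(g)                  # illustrative hidden layer
g = Dense(image_size * image_size, activation='sigmoid')(g)
fake_image = Reshape((image_size, image_size, 1))(g)
generator = Model([z, y], fake_image, name='generator')

# Discriminator: probability that the image is real = D(x | y)
x = Input(shape=(image_size, image_size, 1), name='image_input')
d = Flatten()(x)
d = concatenate([d, y], axis=1)                       # condition the image on y
d = Dense(256, activation='relu')(d)                  # illustrative hidden layer
real_prob = Dense(1, activation='sigmoid')(d)
discriminator = Model([x, y], real_prob, name='discriminator')
```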

The basis of CGAN is still the same as the original GAN principle, except that the discriminator and generator inputs are conditioned on one-hot labels, y. By incorporating this condition in Equations 4.1.1 and 4.1.5, the loss functions for the discriminator and generator are shown in Equations 4.3.1 and 4.3.2, respectively:

$$L^{(D)}\left(\theta^{(G)}, \theta^{(D)}\right) = -\mathbb{E}_{x \sim p_{data}} \log D(x \mid y) - \mathbb{E}_{z} \log\left(1 - D\left(G(z \mid y')\right)\right) \quad \text{(Equation 4.3.1)}$$

$$L^{(G)}\left(\theta^{(G)}, \theta^{(D)}\right) = -\mathbb{E}_{z} \log D\left(G(z \mid y')\right) \quad \text{(Equation 4.3.2)}$$

Given Figure 4.3.2, it may be more appropriate to write the loss functions as:

$$L^{(D)}\left(\theta^{(G)}, \theta^{(D)}\right) = -\mathbb{E}_{x \sim p_{data}} \log D(x \mid y) - \mathbb{E}_{z} \log\left(1 - D\left(G(z \mid y') \mid y'\right)\right)$$

and

$$L^{(G)}\left(\theta^{(G)}, \theta^{(D)}\right) = -\mathbb{E}_{z} \log D\left(G(z \mid y') \mid y'\right)$$
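In practice, each expectation is estimated over a minibatch, and every term is just binary cross-entropy on the conditioned discriminator output, with target 1.0 for real and 0.0 for fake. The following is a minimal sketch of that correspondence, following the Figure 4.3.2 form in which the fake term is also conditioned on y', and assuming the generator, discriminator, and constants from the earlier sketch:

```python
import numpy as np
from tensorflow.keras.datasets import mnist

batch_size = 64

# A batch of real images x with their true one-hot labels y (assumption: MNIST)
(x_train, y_train), _ = mnist.load_data()
idx = np.random.randint(0, x_train.shape[0], batch_size)
x_real = x_train[idx].astype('float32')[..., np.newaxis] / 255.0
y_real = np.eye(num_labels)[y_train[idx]]

# Fake images G(z | y') from noise z and randomly drawn fake labels y'
z = np.random.uniform(-1.0, 1.0, size=(batch_size, latent_size))
y_prime = np.eye(num_labels)[np.random.randint(0, num_labels, batch_size)]
x_fake = generator.predict([z, y_prime])

# Conditioned discriminator outputs
d_real = discriminator.predict([x_real, y_real])    # D(x | y)
d_fake = discriminator.predict([x_fake, y_prime])   # D(G(z | y') | y')

# Discriminator loss: -E log D(x|y) - E log(1 - D(G(z|y')|y'))
d_loss = -np.mean(np.log(d_real + 1e-7)) - np.mean(np.log(1.0 - d_fake + 1e-7))
# Generator loss: -E log D(G(z|y')|y')
g_loss = -np.mean(np.log(d_fake + 1e-7))
```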

The new loss function of the discriminator aims to minimize the error of predicting real images coming from the dataset and fake images coming from the generator, given their one-hot labels. Figure 4.3.2 shows how to train the discriminator.

Figure 4.3.2: Training the CGAN discriminator is similar to training the GAN discriminator. The only difference is that both the generated fake images and the dataset's real images are conditioned on their corresponding one-hot labels.
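Concretely, one such discriminator update can be sketched as follows, assuming the models and minibatch arrays from the previous snippets; the optimizer and learning rate are illustrative assumptions, not the book's exact settings:

```python
import numpy as np
from tensorflow.keras.optimizers import RMSprop

# The discriminator is trained on its own, so it is compiled directly.
discriminator.compile(loss='binary_crossentropy',
                      optimizer=RMSprop(learning_rate=2e-4),
                      metrics=['accuracy'])

# One update as in Figure 4.3.2: real images with their true labels get
# target 1.0, generated images with their sampled labels y' get target 0.0.
images = np.concatenate([x_real, x_fake])
labels = np.concatenate([y_real, y_prime])
targets = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))])
d_loss, d_acc = discriminator.train_on_batch([images, labels], targets)
```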

