Advanced Deep Learning with Keras


The new loss function of the generator minimizes the correct prediction of the discriminator on fake images conditioned on the specified one-hot labels. In other words, the generator learns to produce the specific MNIST digit indicated by its one-hot vector while fooling the discriminator. The following figure shows how to train the generator:


Figure 4.3.3: Training the CGAN generator through the adversarial network is similar to training the GAN generator. The only difference is that the generated fake images are conditioned on one-hot labels.

The following listing highlights the minor changes needed in the discriminator model. The code processes the one-hot vector with a Dense layer and concatenates it with the image input. The Model instance is modified to take both the image and the one-hot vector as inputs.

Listing 4.3.1, cgan-mnist-4.3.1.py shows the CGAN discriminator. The changes made with respect to the DCGAN discriminator are highlighted.

def build_discriminator(inputs, y_labels, image_size):
    """Build a Discriminator Model

    Inputs are concatenated after Dense layer.
    Stack of LeakyReLU-Conv2D to discriminate real from fake.
    The network does not converge with BN so it is not used here
    unlike in DCGAN paper.

    # Arguments
        inputs (Layer): Input layer of the discriminator (the image)
        y_labels (Layer): Input layer for one-hot vector to condition
            the inputs
        image_size: Target size of one side (assuming square image)
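The listing is cut off by the page break here. Based on the description above and the docstring (a Dense layer projects the one-hot vector so it can be concatenated with the image, followed by a stack of LeakyReLU-Conv2D without batch normalization), a complete build_discriminator could look roughly like the sketch below; the kernel size, filter counts, and tensorflow.keras import path are assumptions, not the book's exact values.

from tensorflow.keras.layers import (Activation, Conv2D, Dense, Flatten,
                                     LeakyReLU, Reshape, concatenate)
from tensorflow.keras.models import Model

def build_discriminator(inputs, y_labels, image_size):
    kernel_size = 5
    layer_filters = [32, 64, 128, 256]

    # project the one-hot label to image_size * image_size and reshape it
    # into a single channel so it can be concatenated with the image input
    y = Dense(image_size * image_size)(y_labels)
    y = Reshape((image_size, image_size, 1))(y)
    x = concatenate([inputs, y])

    # stack of LeakyReLU-Conv2D; no batch normalization, per the docstring
    for filters in layer_filters:
        strides = 1 if filters == layer_filters[-1] else 2
        x = LeakyReLU(0.2)(x)
        x = Conv2D(filters=filters,
                   kernel_size=kernel_size,
                   strides=strides,
                   padding='same')(x)

    x = Flatten()(x)
    x = Dense(1)(x)
    x = Activation('sigmoid')(x)
    # the Model instance now takes both the image and the one-hot vector
    return Model([inputs, y_labels], x, name='discriminator')

The caller creates the two Input layers (the image and the one-hot label vector) and passes them in, so the returned Model conditions its real-versus-fake decision on both, as the text describes.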

