Advanced Deep Learning with Keras

Chapter 4

    # only the generator is trained
    # generate noise using uniform distribution
    noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
    # label fake images as real or 1.0
    y = np.ones([batch_size, 1])
    # train the adversarial network
    # note that unlike in discriminator training,
    # we do not save the fake images in a variable
    # the fake images go to the discriminator input of the
    # adversarial network for classification
    # log the loss and accuracy
    loss, acc = adversarial.train_on_batch(noise, y)
    log = "%s [adversarial loss: %f, acc: %f]" % (log, loss, acc)
    print(log)
    if (i + 1) % save_interval == 0:
        if (i + 1) == train_steps:
            show = True
        else:
            show = False
        # plot generator images on a periodic basis
        plot_images(generator,
                    noise_input=noise_input,
                    show=show,
                    step=(i + 1),
                    model_name=model_name)

# save the model after training the generator
# the trained generator can be reloaded for future MNIST digit generation
generator.save(model_name + ".h5")
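For context, the adversarial model trained above stacks the generator on top of the discriminator with the discriminator's weights frozen, which is why only the generator learns from adversarial.train_on_batch(). The following is a minimal sketch of that wiring; build_generator, build_discriminator, the RMSprop learning rates, and latent_size = 100 are assumptions standing in for code defined earlier in the chapter:

from keras.models import Sequential
from keras.optimizers import RMSprop

latent_size = 100  # assumed noise vector size

# the discriminator is compiled first so it trains normally on its own
discriminator = build_discriminator()  # assumed builder from earlier
discriminator.compile(loss='binary_crossentropy',
                      optimizer=RMSprop(lr=2e-4),
                      metrics=['accuracy'])

generator = build_generator()  # assumed builder from earlier

# freeze the discriminator inside the stacked model so that
# adversarial.train_on_batch() updates only the generator weights
discriminator.trainable = False
adversarial = Sequential([generator, discriminator])
adversarial.compile(loss='binary_crossentropy',
                    optimizer=RMSprop(lr=1e-4),
                    metrics=['accuracy'])

Because trainable is flipped after the discriminator has already been compiled, the discriminator still updates during its own train_on_batch() calls; the freeze only applies to the stacked adversarial model.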

Figure 4.2.1 shows the evolution of the generator's fake images as a function of training steps. At 5,000 steps, the generator is already producing recognizable images. It's very much like having an agent that knows how to draw digits. It's worth noting that some digits change from one recognizable form (for example, the 8 in the 2nd column of the last row) to another (for example, a 0). When the training converges, the discriminator loss approaches 0.5 while the adversarial loss approaches 1.0, as the following log output shows:

39997: [discriminator loss: 0.423329, acc: 0.796875] [adversarial loss: 0.819355, acc: 0.484375]
39998: [discriminator loss: 0.471747, acc: 0.773438] [adversarial loss: 1.570030, acc: 0.203125]
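Since generator.save() writes the trained model to model_name + ".h5", the generator can be reloaded later to synthesize fresh digits without retraining. A quick sketch, assuming model_name was "dcgan_mnist" and a latent size of 100:

import numpy as np
from keras.models import load_model

# reload the generator saved at the end of training
generator = load_model("dcgan_mnist.h5")

# sample noise from the same uniform distribution used during training
noise = np.random.uniform(-1.0, 1.0, size=[16, 100])
fake_images = generator.predict(noise)
print(fake_images.shape)  # expected: (16, 28, 28, 1) for MNIST-sized outputs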
