Eventually, we reach a state where the improvement is no longer significant for either player. We check this by plotting the loss functions, to see when the two losses (generator loss and discriminator loss) reach a plateau. We don't want the game to be skewed too heavily one way; if the forger were to immediately learn how to fool the judge on every occasion, then the forger would have nothing more to learn. In practice, training GANs is really hard, and a lot of research is being done on analyzing GAN convergence; see https://avg.is.tuebingen.mpg.de/projects/convergence-and-stability-of-gan-training for details on the convergence and stability of different types of GANs. In generative applications of GANs, we want the generator to learn a little better than the discriminator.
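To monitor training in this way, you might record the two losses once per epoch and plot them. The following is a minimal sketch of such a plot; the lists g_losses and d_losses are hypothetical names for values collected during training, not variables defined in this chapter:

import matplotlib.pyplot as plt

# g_losses and d_losses are assumed to hold one loss value per epoch,
# recorded while training the GAN (hypothetical names for this sketch).
plt.plot(g_losses, label='generator loss')
plt.plot(d_losses, label='discriminator loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()

A rough plateau in both curves, with neither loss collapsing towards zero, is the kind of behavior described above.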

Let's now delve deeper into how GANs learn. Both the discriminator and the generator take turns to learn. The learning can be divided into two steps:

1. Here the discriminator, D(x), learns. The generator, G(z), is used to generate fake images from random noise z (which follows some prior distribution P(z)). The fake images from the generator and the real images from the training dataset are both fed to the discriminator, which performs supervised learning to separate fake from real. If P_data(x) is the training dataset distribution, then the discriminator network tries to maximize its objective so that D(x) is close to 1 when the input data is real and close to 0 when the input data is fake.

2. In the next step, the generator network learns. Its goal is to fool the discriminator network into thinking that the generated G(z) is real, that is, to force D(G(z)) close to 1. Together, the two objectives form the minimax game written out after this list.
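For reference, the two objectives above are usually combined into a single minimax game over a value function V(D, G); this is the standard GAN formulation, written here in the usual notation (E denotes expectation over the indicated distribution):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim P(z)}[\log(1 - D(G(z)))]

The discriminator tries to maximize V, while the generator tries to minimize it.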

The two steps are repeated sequentially. Once training ends, the discriminator is no longer able to discriminate between real and fake data, and the generator has become a pro at creating data very similar to the training data. The stability of the interplay between discriminator and generator is an actively researched problem.
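To make the alternation concrete, the following is a minimal sketch of one training iteration in TensorFlow. It is an illustration only, not the implementation built in the next section; the names generator, discriminator, g_optimizer, d_optimizer, batch_size, and noise_dim are assumptions for the sake of the example:

import tensorflow as tf

# Binary cross-entropy on raw logits; the discriminator is assumed to
# output a single unnormalized score per image.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(real_images, generator, discriminator,
               g_optimizer, d_optimizer, batch_size, noise_dim=100):
    noise = tf.random.normal([batch_size, noise_dim])

    # Step 1: update the discriminator to push D(x) towards 1 for real
    # images and towards 0 for generated ones.
    with tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    d_optimizer.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # Step 2: update the generator to push D(G(z)) towards 1.
    with tf.GradientTape() as g_tape:
        fake_images = generator(noise, training=True)
        fake_logits = discriminator(fake_images, training=True)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    g_optimizer.apply_gradients(zip(g_grads, generator.trainable_variables))

    return d_loss, g_loss

Calling train_step once per batch, epoch after epoch, is exactly the sequential repetition of the two steps described above.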

Now that you have an idea of what GANs are, let's look at a practical application of a GAN in which "handwritten" digits are generated.

MNIST using GAN in TensorFlow

Let us build a simple GAN capable of generating handwritten digits. We will use the MNIST handwritten digits to train the network, accessing the data through the TensorFlow Keras datasets module. The data contains 60,000 training images of handwritten digits, each of size 28 × 28. The pixel values of the digits lie between 0 and 255; we normalize the input values such that each pixel has a value in the range [-1, 1]:

import numpy as np
from tensorflow.keras.datasets import mnist

(X_train, _), (_, _) = mnist.load_data()
X_train = (X_train.astype(np.float32) - 127.5)/127.5
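Scaling to [-1, 1] is a common choice because the generator's output layer typically uses a tanh activation, which also produces values in [-1, 1], so real and generated images live in the same range.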
