
Chapter 6

Figure 7: The architecture of the InfoGAN, visualized

The concatenated vector (Z, c) is fed to the generator. Q(c|X) is also a neural network; combined with the generator, it forms a mapping between the random noise Z and the latent code, producing an estimate c_hat of c given X. This is achieved by adding a regularization term to the objective function of the conventional GAN:

min_G max_D V_I(D, G) = V(D, G) − λ I(c; G(Z, c))

The term V(D, G) is the loss function of the conventional GAN, and the second term is the regularization term, where λ is a constant whose value was set to 1 in the paper, and I(c; G(Z, c)) is the mutual information between the latent code c and the generated image G(Z, c).
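As an illustrative sketch (not the book's code), the objective above can be written out numerically. The snippet below builds the concatenated generator input (Z, c) and combines the non-saturating generator loss with a cross-entropy surrogate for the mutual-information term, which is how the variational lower bound on I(c; G(Z, c)) is computed in practice for a categorical code. The function name and the 62-dimensional noise / 10-category code are assumptions based on the paper's MNIST setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build the generator input (Z, c): random noise concatenated with a
# one-hot categorical latent code (62 + 10 dims, assumed MNIST setup).
batch = 4
z = rng.standard_normal((batch, 62))
c_idx = rng.integers(0, 10, batch)
c = np.eye(10)[c_idx]
gen_input = np.concatenate([z, c], axis=1)  # shape (4, 72), fed to G


def infogan_generator_loss(d_fake, q_logits, c_idx, lam=1.0):
    """Conventional generator loss minus lam times the mutual-information bound.

    d_fake:   discriminator outputs D(G(Z, c)), values in (0, 1)
    q_logits: raw outputs of the Q(c|X) head, one row per sample
    c_idx:    indices of the true categorical codes c
    """
    # Non-saturating generator loss: -E[log D(G(Z, c))]
    gan_loss = -np.mean(np.log(d_fake + 1e-8))
    # Softmax of Q's logits gives Q(c|X); the cross-entropy -E[log Q(c|X)]
    # is (up to a constant) the negative of the variational lower bound on
    # I(c; G(Z, c)), so minimizing it maximizes the mutual information.
    e = np.exp(q_logits - q_logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    mi_penalty = -np.mean(np.log(probs[np.arange(len(c_idx)), c_idx] + 1e-8))
    return gan_loss + lam * mi_penalty
```

In a real training loop, both the generator and the Q network would be updated to minimize this combined loss, while the discriminator minimizes the usual GAN discriminator loss.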

Following are the exciting results of the InfoGAN on the MNIST dataset:

Figure 8: Results of using the InfoGAN on the MNIST dataset

