Advanced Deep Learning with Keras


Figure 4.3.4: The fake images generated by CGAN at different training steps when conditioned with labels [0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5]

You're encouraged to run the trained generator model to see new synthesized MNIST digit images:

python3 cgan-mnist-4.3.1.py --generator=cgan_mnist.h5

Alternatively, a specific digit (for example, 8) can also be requested:

python3 cgan-mnist-4.3.1.py --generator=cgan_mnist.h5 --digit=8

With CGAN, it's like having an agent that we can ask to draw digits, similar to how humans write digits. The key advantage of CGAN over DCGAN is that we can specify which digit we want the agent to draw.
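If you'd rather drive the generator from your own script, the following is a minimal sketch of what the --generator option does: load the saved generator model and condition it with a one-hot label for the requested digit. The tensorflow.keras import path, the 100-dimensional latent vector, and the [noise, label] input order are assumptions based on the CGAN built in this chapter; adjust them if your saved model differs.

import numpy as np
from tensorflow.keras.models import load_model

# Load the trained generator saved by cgan-mnist-4.3.1.py.
generator = load_model("cgan_mnist.h5")

digit = 8          # the class we ask the agent to draw
latent_size = 100  # noise vector size assumed from training

# Sample a noise vector and build the one-hot label condition.
noise = np.random.uniform(-1.0, 1.0, size=[1, latent_size])
label = np.zeros([1, 10])
label[0, digit] = 1.0

# Assumed input order [noise, label]; the output is a batch of fake images.
fake_image = generator.predict([noise, label])
print(fake_image.shape)  # e.g. (1, 28, 28, 1)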


Generative Adversarial Networks (GANs)

Conclusion

This chapter discussed the general principles behind GANs to give you a foundation for the more advanced topics we'll now move on to, including Improved GANs, Disentangled Representation GANs, and Cross-Domain GANs. We started this chapter by understanding how GANs are made up of two networks, called the generator and the discriminator. The role of the discriminator is to discriminate between real and fake signals, while the aim of the generator is to fool the discriminator. The generator is normally combined with the discriminator to form an adversarial network; it is through training this adversarial network that the generator learns how to produce fake signals that can trick the discriminator.
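As a compact illustration of that generator-discriminator-adversarial structure, here is a toy Keras sketch that uses small dense networks in place of the chapter's convolutional models. The layer sizes and optimizer settings are illustrative assumptions, but the wiring (compile the discriminator, freeze it, then stack the generator on top) mirrors the pattern used throughout this chapter.

from tensorflow.keras.layers import Dense, Input, LeakyReLU
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import RMSprop

latent_size = 100     # noise vector fed to the generator
image_size = 28 * 28  # flattened MNIST image, for this toy example

# Toy generator: maps noise to a fake flattened image.
generator = Sequential([
    Input(shape=(latent_size,)),
    Dense(256), LeakyReLU(0.2),
    Dense(image_size, activation="sigmoid"),
])

# Toy discriminator: scores an image as real (1) or fake (0).
discriminator = Sequential([
    Input(shape=(image_size,)),
    Dense(256), LeakyReLU(0.2),
    Dense(1, activation="sigmoid"),
])
discriminator.compile(loss="binary_crossentropy",
                      optimizer=RMSprop(learning_rate=2e-4))

# Adversarial model: the generator stacked on a frozen discriminator,
# so training it updates only the generator to fool the discriminator.
discriminator.trainable = False
noise = Input(shape=(latent_size,))
adversarial = Model(noise, discriminator(generator(noise)))
adversarial.compile(loss="binary_crossentropy",
                    optimizer=RMSprop(learning_rate=1e-4))

Training then alternates between the two models: the discriminator is trained on a batch of real and fake images with their true labels, and the adversarial model is trained on noise labeled as "real" so the gradient pushes the generator toward fooling the discriminator.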

We also learned that GANs are easy to build but notoriously difficult to train. Two example implementations in Keras were presented. DCGAN demonstrated that it is possible to train GANs to generate fake images using deep CNNs; the fake images in this case are MNIST digits. However, the DCGAN generator has no control over which specific digit it should draw. CGAN addressed this problem by conditioning the generator to draw a specific digit, where the condition takes the form of a one-hot label. CGAN is useful if we want to build an agent that can generate data of a specific class.
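To make the one-hot conditioning concrete, here is a minimal sketch of a conditional generator's input stage, assuming a 100-dimensional noise vector and 10 MNIST classes; the Dense layer is a toy stand-in for the transposed-convolution stack the chapter's generator actually uses after this point.

from tensorflow.keras.layers import Dense, Input, concatenate
from tensorflow.keras.models import Model

latent_size = 100
num_classes = 10

# Two inputs: random noise and the one-hot label acting as the condition.
noise = Input(shape=(latent_size,), name="z_input")
label = Input(shape=(num_classes,), name="label")

# The condition is concatenated with the noise, so the generator sees
# a 110-dimensional vector that also encodes the requested digit.
x = concatenate([noise, label], axis=1)
x = Dense(7 * 7 * 128)(x)  # toy stand-in for the real conv stack
conditional_generator = Model([noise, label], x, name="cgan_generator")
conditional_generator.summary()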

In the next chapter, improvements to DCGAN and CGAN will be introduced. In particular, the focus is on how to stabilize the training of DCGAN and how to improve the perceptual quality of CGAN. This will be done by introducing new loss functions and slightly different model architectures.

References

1. Ian Goodfellow. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv preprint arXiv:1701.00160, 2016 (https://arxiv.org/pdf/1701.00160.pdf).

2. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434, 2015 (https://arxiv.org/pdf/1511.06434.pdf).

3. Mehdi Mirza and Simon Osindero. Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784, 2014 (https://arxiv.org/pdf/1411.1784.pdf).

4. Tero Karras and others. Progressive Growing of GANs for Improved Quality, Stability, and Variation. ICLR, 2018 (https://arxiv.org/pdf/1710.10196.pdf).

