Advanced Deep Learning with Keras

Cross-Domain GANs

Figure 7.1.14 shows CycleGAN reconstructing MNIST digits in the forward cycle. The reconstructed MNIST digits are almost identical to the source MNIST digits.

Figure 7.1.15 shows CycleGAN reconstructing SVHN digits in the backward cycle. Many target images are faithfully reconstructed. Some digits are clearly the same, such as the last two columns of the second row (3 and 4), while others are the same but blurred, like the first two columns of the first row (5 and 2). Some digits are transformed into different digits even though the style is preserved, like the first two columns of the second row (33 and 6 become 1 and an unrecognizable digit).
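The near-identical reconstructions in Figures 7.1.14 and 7.1.15 are a direct consequence of the cycle-consistency loss, which penalizes the L1 distance between a source image and its round trip through both generators. As a minimal, self-contained illustration, the sketch below measures the forward-cycle reconstruction error; the two generators here are hypothetical, trivially invertible stand-ins, not the book's trained networks:

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators. The real generators are
# trained networks (the *.h5 models used later in this section); here we use
# a trivially invertible pair so the sketch is self-contained.
def g_target(x):
    """Hypothetical source -> target 'style transfer': invert grayscale."""
    return 1.0 - x

def g_source(y):
    """Its inverse, mapping target-style images back to source style."""
    return 1.0 - y

# A batch of fake source "digits" in [0, 1], MNIST-shaped.
rng = np.random.default_rng(0)
source = rng.random((8, 28, 28, 1))

# Forward cycle: source -> target -> reconstructed source.
reconstructed = g_source(g_target(source))

# Cycle-consistency error: the L1 term CycleGAN minimizes during training.
# A small value means the round trip preserves the source image.
cycle_error = np.mean(np.abs(reconstructed - source))
print(f"mean cycle-consistency error: {cycle_error:.6f}")
```

With perfectly invertible stand-in generators the error is essentially zero; the trained CycleGAN generators only approximate this, which is why some reconstructions in Figure 7.1.15 come back blurred or as a different digit.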

On a personal note, I encourage you to run the image translation by using the

pretrained models of CycleGAN with PatchGAN:

python3 cyclegan-7.1.1.py --mnist_svhn_g_source=cyclegan_mnist_svhn-g_source.h5 --mnist_svhn_g_target=cyclegan_mnist_svhn-g_target.h5

Conclusion

In this chapter, we've discussed CycleGAN as an algorithm that can be used for image translation. In CycleGAN, the source and target data are not necessarily aligned. We demonstrated two examples, grayscale ↔ color and MNIST ↔ SVHN, though there are many other image translations that CycleGAN can perform.

In the next chapter, we'll turn to another type of generative model: Variational AutoEncoders (VAEs). VAEs have a similar objective of learning how to generate new images (data), but they focus on learning a latent vector modeled as a Gaussian distribution. We'll demonstrate other similarities to the problems addressed by GANs in the form of conditional VAEs and the disentangling of latent representations in VAEs.
