
Advanced Deep Learning with Keras


Chapter 8

Figure 8.1.7: The digits generated as a function of latent vector mean values (VAE MLP).

For ease of interpretation, the range of values for the mean is similar to that of Figure 8.1.6.

Using CNNs for VAEs

In the original paper, Auto-Encoding Variational Bayes [1], the VAE network was implemented using MLPs, similar to what we covered in the previous section. In this section, we'll demonstrate that using CNNs results in a significant improvement in the quality of the generated digits and a remarkable reduction in the number of parameters, down to 134,165.

Listing 8.1.3 shows the encoder, decoder, and VAE network. This code was also contributed to the official Keras GitHub repository. For conciseness, some lines of code that are similar to the MLP implementation are not shown. The encoder is made of two CNN layers and two MLP layers that generate the latent code. The encoder's output structure is similar to that of the MLP implementation seen in the previous section.

The decoder is made up of one MLP layer and three transposed CNN layers. Figures 8.1.8 to 8.1.10 show the encoder, decoder, and VAE models. For the CNN VAE, RMSprop results in a lower loss than Adam.
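Continuing the sketch above (and reusing latent_dim, kernel_size, shape, z_mean, z_log_var, inputs, and encoder from it), the decoder mirrors the encoder, and the two models are joined into the VAE with the usual reconstruction-plus-KL loss. Again, the layer sizes here are assumptions rather than the book's exact values:

from keras.layers import Reshape, Conv2DTranspose
from keras.losses import binary_crossentropy

latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
# one MLP layer to undo the encoder's Flatten
x = Dense(shape[1] * shape[2] * shape[3], activation='relu')(latent_inputs)
x = Reshape((shape[1], shape[2], shape[3]))(x)

# two transposed CNN layers upsample the feature maps back to 28 x 28
for filters in [64, 32]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        strides=2,
                        padding='same',
                        activation='relu')(x)

# a third transposed CNN layer reconstructs the single-channel image
outputs = Conv2DTranspose(filters=1,
                          kernel_size=kernel_size,
                          padding='same',
                          activation='sigmoid',
                          name='decoder_output')(x)
decoder = Model(latent_inputs, outputs, name='decoder')

# VAE = encoder + decoder; reconstruct from the sampled latent code z
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae_cnn')

# loss = reconstruction (binary cross-entropy) + KL divergence
reconstruction_loss = binary_crossentropy(K.flatten(inputs),
                                          K.flatten(outputs)) * 28 * 28
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var),
                       axis=-1)
vae.add_loss(K.mean(reconstruction_loss + kl_loss))
vae.compile(optimizer='rmsprop')  # RMSprop gives a lower loss than Adam here

Because the loss is attached with add_loss rather than passed to compile, training then amounts to vae.fit(x_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, None)).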

