Advanced Deep Learning with Keras

Variational Autoencoders (VAEs)

The preceding figure shows the continuous latent space of a VAE using the CNN implementation after 30 epochs. The region assigned to each digit may differ, but the distribution is roughly the same. The following figure shows the output of the generative model. Qualitatively, there are fewer ambiguous digits compared to Figure 8.1.7 from the MLP implementation:

Figure 8.1.12: The digits generated as a function of latent vector mean values (VAE CNN). For ease of interpretation, the range of values for the mean is similar to Figure 8.1.11.
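A grid like Figure 8.1.12 can be produced by sweeping the two latent mean values over a range and decoding each point. The sketch below is illustrative only (not the book's plotting listing); it assumes a 2-dimensional latent space and a Keras decoder model whose predict call returns 28x28 images, and the helper name plot_latent_grid is a hypothetical choice:

import numpy as np
import matplotlib.pyplot as plt

def plot_latent_grid(decoder, n=30, digit_size=28, mean_range=4.0):
    """Decode an n x n grid of latent mean values in [-mean_range, mean_range]."""
    figure = np.zeros((digit_size * n, digit_size * n))
    grid_x = np.linspace(-mean_range, mean_range, n)
    grid_y = np.linspace(-mean_range, mean_range, n)[::-1]
    for i, yi in enumerate(grid_y):
        for j, xi in enumerate(grid_x):
            z_sample = np.array([[xi, yi]])              # 2-D latent vector (mean values)
            x_decoded = decoder.predict(z_sample, verbose=0)
            digit = x_decoded[0].reshape(digit_size, digit_size)
            figure[i * digit_size:(i + 1) * digit_size,
                   j * digit_size:(j + 1) * digit_size] = digit
    plt.figure(figsize=(10, 10))
    plt.imshow(figure, cmap="Greys_r")
    plt.axis("off")
    plt.show()

Calling plot_latent_grid(decoder) after training tiles the decoded digits into a single image, which is how the smooth transitions between digit classes across the latent plane become visible.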

Conditional VAE (CVAE)

Conditional VAE (CVAE) [2] is similar in concept to CGAN. In the context of the MNIST dataset, if the latent space is randomly sampled, a VAE has no control over which digit will be generated. A CVAE addresses this problem by including a condition, the one-hot label of the digit to produce. The condition is imposed on both the encoder and decoder inputs, as sketched below.
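As an illustration only, and not the book's listing, a minimal Keras sketch of this conditioning is shown below. It assumes 28x28x1 MNIST images, a 2-dimensional latent space, and dense layers for brevity; layer sizes and names are assumptions:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 2
num_classes = 10

# Encoder: impose the condition by concatenating the one-hot label
# with the flattened image before the encoding layers.
image_in = keras.Input(shape=(28, 28, 1), name="image")
label_in = keras.Input(shape=(num_classes,), name="one_hot_label")
x = layers.Flatten()(image_in)
x = layers.Concatenate()([x, label_in])
x = layers.Dense(512, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)

# Reparameterization trick: z = mean + exp(0.5 * log_var) * epsilon
def sampling(args):
    z_mean, z_log_var = args
    eps = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

z = layers.Lambda(sampling, name="z")([z_mean, z_log_var])
encoder = keras.Model([image_in, label_in], [z_mean, z_log_var, z], name="encoder")

# Decoder: impose the same condition by concatenating the one-hot label
# with the latent vector before reconstruction.
z_in = keras.Input(shape=(latent_dim,), name="z_sampled")
x = layers.Concatenate()([z_in, label_in])
x = layers.Dense(512, activation="relu")(x)
x = layers.Dense(28 * 28, activation="sigmoid")(x)
image_out = layers.Reshape((28, 28, 1))(x)
decoder = keras.Model([z_in, label_in], image_out, name="decoder")

At generation time, z is drawn from the prior and the one-hot label of the desired digit is fed to the decoder, so the class of the output is no longer left to chance.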

Formally, the core equation of the VAE in Equation 8.1.10 is modified to include the condition c:
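As a sketch of the conditioned form, assuming Equation 8.1.10 is the standard VAE evidence lower bound with encoder $Q_\phi(z \mid x)$, decoder $P_\theta(x \mid z)$, and prior $P_\theta(z)$, every distribution is additionally conditioned on $c$:

$$\log P_\theta(x \mid c) - D_{KL}\big(Q_\phi(z \mid x, c)\,\|\,P_\theta(z \mid x, c)\big) = \mathbb{E}_{z \sim Q}\big[\log P_\theta(x \mid z, c)\big] - D_{KL}\big(Q_\phi(z \mid x, c)\,\|\,P_\theta(z \mid c)\big)$$

The right-hand side is the conditional evidence lower bound that is maximized during training, exactly as in the unconditional VAE but with the label carried through every term.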
