
Variational Autoencoders (VAEs)

Similar to the Generative Adversarial Networks (GANs) that we've discussed in the previous chapters, Variational Autoencoders (VAEs) [1] belong to the family of generative models. The generator of a VAE is able to produce meaningful outputs while navigating its continuous latent space. The possible attributes of the decoder outputs are explored through the latent vector.

In GANs, the focus is on how to arrive at a model that approximates the input distribution. VAEs instead attempt to model the input distribution from a decodable continuous latent space. This is one possible reason why GANs are able to generate more realistic signals than VAEs. For example, in image generation, GANs produce more realistic-looking images, while VAEs generate images that are comparatively less sharp.

Within VAEs, the focus is on the variational inference of latent codes. Therefore, VAEs provide a suitable framework for both learning and efficient Bayesian inference with latent variables. For example, VAEs with disentangled representations enable latent code reuse for transfer learning.
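Concretely, variational inference replaces the intractable true posterior over the latent code with a tractable approximation Q(z|x) produced by the encoder, and training maximizes the standard evidence lower bound (ELBO) on the data log-likelihood. In the common VAE notation (used here as a general statement rather than a specific listing from this chapter):

\log P(x) \;\ge\; \mathbb{E}_{z \sim Q(z|x)}\big[\log P(x|z)\big] \;-\; D_{KL}\big(Q(z|x)\,\|\,P(z)\big)

The first term rewards faithful reconstruction, while the KL term keeps the approximate posterior close to the prior P(z); it is this regularization that makes the latent space continuous and navigable.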

In terms of structure, VAEs bear a resemblance to autoencoders. They are also made up of an encoder (also known as a recognition or inference model) and a decoder (also known as a generative model). Both VAEs and autoencoders attempt to reconstruct the input data while learning the latent vector. However, unlike autoencoders, the latent space of a VAE is continuous, and the decoder itself is used as a generative model.
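As a rough sketch of this encoder-decoder structure in Keras (the layer sizes, the flattened 28 x 28 input, and the 2-dimensional latent space below are illustrative assumptions, not the exact architecture built later in this chapter), the encoder maps an input to the mean and log-variance of a Gaussian over the latent code, a latent vector is sampled via the reparameterization trick, and the decoder maps that vector back to a reconstruction:

# Minimal VAE structure sketch in Keras; shapes and sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras import backend as K

input_dim = 784   # assumed: flattened 28 x 28 images
latent_dim = 2    # assumed: 2D latent space for easy visualization

# Encoder (recognition/inference model): x -> parameters of Q(z|x)
inputs = layers.Input(shape=(input_dim,), name="encoder_input")
h = layers.Dense(256, activation="relu")(inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(h)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(h)

# Reparameterization trick: z = mean + std * epsilon keeps sampling differentiable
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=K.shape(z_mean))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

z = layers.Lambda(sampling, name="z")([z_mean, z_log_var])
encoder = Model(inputs, [z_mean, z_log_var, z], name="encoder")

# Decoder (generative model): latent vector z -> reconstruction of x
latent_inputs = layers.Input(shape=(latent_dim,), name="z_sampling")
h_dec = layers.Dense(256, activation="relu")(latent_inputs)
outputs = layers.Dense(input_dim, activation="sigmoid")(h_dec)
decoder = Model(latent_inputs, outputs, name="decoder")

# End-to-end VAE: reconstruction loss plus KL divergence to the prior N(0, I)
reconstruction = decoder(encoder(inputs)[2])
vae = Model(inputs, reconstruction, name="vae")
recon_loss = input_dim * tf.keras.losses.binary_crossentropy(inputs, reconstruction)
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae.add_loss(K.mean(recon_loss + kl_loss))
vae.compile(optimizer="adam")

Once trained, the decoder alone acts as the generative model: sampling z from the prior N(0, I) and calling decoder.predict(z) produces new outputs, which is the sense in which the VAE latent space can be navigated.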
