
Variational Autoencoders (VAEs)

For example, it was previously [6] demonstrated that an isotropic Gaussian could be morphed into a ring-shaped distribution using the function $g(z) = \frac{z}{10} + \frac{z}{\|z\|}$.
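To see this concretely, the following is a minimal NumPy sketch (an illustration, not from the original text) that draws samples from a 2D isotropic Gaussian and applies g to them; plotting the result reveals the ring:

import numpy as np
import matplotlib.pyplot as plt

# Draw samples from a 2D isotropic Gaussian.
z = np.random.normal(size=(1000, 2))

# g(z) = z/10 + z/||z|| pushes every sample toward the unit circle,
# morphing the Gaussian blob into a ring-shaped distribution.
norm = np.linalg.norm(z, axis=1, keepdims=True)
g = z / 10.0 + z / norm

plt.scatter(g[:, 0], g[:, 1], s=2)
plt.show()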

Readers can further explore the theory as presented in Luc Devroye's Sample-Based Non-Uniform Random Variate Generation [7].

In summary, the VAE loss function is defined as:

$\mathcal{L}_{VAE} = \mathcal{L}_{R} + \mathcal{L}_{KL}$ (Equation 8.1.12)
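As a sketch of how the two terms combine in practice, the following Keras-style function (an illustration under stated assumptions, not the author's exact code) computes Equation 8.1.12 for a diagonal-Gaussian latent with a binary cross-entropy reconstruction term; the names x_decoded, z_mean, and z_log_var are assumptions:

import tensorflow as tf

def vae_loss(x, x_decoded, z_mean, z_log_var):
    # L_R: reconstruction loss. binary_crossentropy averages over the
    # feature axis, so rescale by the input dimension to get a sum.
    dim = tf.cast(tf.shape(x)[-1], tf.float32)
    reconstruction = dim * tf.keras.losses.binary_crossentropy(x, x_decoded)

    # L_KL: closed-form KL divergence between the approximate posterior
    # N(z_mean, exp(z_log_var)) and the standard normal prior N(0, I).
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)

    # L_VAE = L_R + L_KL, averaged over the batch (Equation 8.1.12).
    return tf.reduce_mean(reconstruction + kl)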

Reparameterization trick

Figure 8.1.1: A VAE network with and without the reparameterization trick

The left side of the preceding figure shows the VAE network. The encoder takes the input x and estimates the mean, µ, and the standard deviation, σ, of the multivariate Gaussian distribution of the latent vector z. The decoder takes samples from the latent vector z to reconstruct the input as x̃. This seems straightforward until the gradient updates happen during backpropagation.

Backpropagation gradients will not pass through the stochastic Sampling block. While it's fine to have stochastic inputs for neural networks, it's not possible for the gradients to go through a stochastic layer.
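The right side of the figure resolves this with the reparameterization trick: instead of sampling z directly, the sampling is rewritten as z = µ + σ⊙ε, where ε ~ N(0, I) enters as an external input, so gradients can flow to µ and σ. The following is a minimal sketch of the trick as a Keras layer, assuming the encoder outputs z_mean and z_log_var (the log of the variance, a common choice for numerical stability):

import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    # Reparameterization trick: z = mu + sigma * epsilon, with
    # epsilon ~ N(0, I) drawn outside the differentiable path.
    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        # exp(0.5 * log(sigma^2)) recovers sigma.
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon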

