Advanced Deep Learning with Keras


Chapter 8

Figure 8.3.3: Digits 0 to 3 generated as a function of latent vector mean values and one-hot label (β-VAE with β = 1, 7, and 10). For ease of interpretation, the range of values for the mean is similar to Figure 8.3.1.

Variational Autoencoders (VAEs)

The Keras code for β-VAE has pre-trained weights. To test β-VAE with β = 7 generating digit 0, we need to run:

$ python3 cvae-cnn-mnist-8.2.1.py --beta=7 --weights=beta-cvae_cnn_mnist.h5 --digit=0

Conclusion

In this chapter, we've covered the principles of variational autoencoders (VAEs). As we learned, VAEs resemble GANs in that both attempt to create synthetic outputs from a latent space. However, VAE networks are much simpler and easier to train than GANs. Conditional VAE and β-VAE are similar in concept to conditional GAN and disentangled representation GAN, respectively.

VAEs have an intrinsic mechanism to disentangle the latent vectors, so building a β-VAE is straightforward. We should note, however, that interpretable and disentangled codes are important in building intelligent agents.

In the next chapter, we're going to focus on reinforcement learning. Without any prior data, an agent learns by interacting with its world. We'll discuss how the agent can be rewarded for correct actions and punished for wrong ones.

References

1. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 2013 (https://arxiv.org/pdf/1312.6114.pdf).
2. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning Structured Output Representation Using Deep Conditional Generative Models. Advances in Neural Information Processing Systems, 2015 (http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf).
3. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35.8, 2013: 1798-1828 (https://arxiv.org/pdf/1206.5538.pdf).
4. Xi Chen et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. Advances in Neural Information Processing Systems, 2016 (http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf).
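The disentangling mechanism mentioned above comes down to a single change in the VAE loss: the KL divergence term is scaled by β. As a minimal sketch (not the book's Keras implementation; the function name and NumPy formulation here are illustrative), the per-sample β-VAE loss can be written as:

```python
import numpy as np

def beta_vae_loss(x, x_decoded, z_mean, z_log_var, beta=1.0):
    """Per-sample beta-VAE loss: binary cross-entropy reconstruction
    plus a beta-weighted KL divergence from the unit Gaussian prior.
    With beta = 1 this reduces to the standard VAE loss."""
    eps = 1e-7
    x_decoded = np.clip(x_decoded, eps, 1.0 - eps)
    # Reconstruction term: sum of per-pixel binary cross-entropies
    reconstruction = -np.sum(
        x * np.log(x_decoded) + (1.0 - x) * np.log(1.0 - x_decoded), axis=-1)
    # KL divergence between N(z_mean, exp(z_log_var)) and N(0, I)
    kl = -0.5 * np.sum(
        1.0 + z_log_var - np.square(z_mean) - np.exp(z_log_var), axis=-1)
    # beta > 1 puts extra pressure on the posterior to match the
    # isotropic prior, which encourages disentangled latent codes
    return reconstruction + beta * kl
```

Raising β (for example, to 7 as in the command above) increases the penalty whenever the encoder's posterior drifts from the prior, which is what trades reconstruction quality for disentanglement.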

