
Variational Autoencoders (VAEs)

The Keras code for the β-VAE comes with pre-trained weights. To test the β-VAE with β = 7 and generate digit 0, we need to run:

$ python3 cvae-cnn-mnist-8.2.1.py --beta=7 --weights=beta-cvae_cnn_mnist.h5 --digit=0
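
Conceptually, the only change the β-VAE makes to the plain VAE is to weight the KL divergence term of the loss by β. The snippet below is a minimal sketch of such a loss in tf.keras; the function name beta_vae_loss and its tensor arguments are illustrative placeholders, not the exact code in cvae-cnn-mnist-8.2.1.py:

from tensorflow.keras import backend as K
from tensorflow.keras.losses import binary_crossentropy

def beta_vae_loss(inputs, outputs, z_mean, z_log_var, beta=7.0):
    # Reconstruction term: binary cross-entropy summed over all
    # 28 x 28 MNIST pixels.
    reconstruction_loss = 28 * 28 * binary_crossentropy(
        K.flatten(inputs), K.flatten(outputs))
    # KL term in closed form for a Gaussian encoder output
    # q(z|x) against a unit Gaussian prior.
    kl_loss = -0.5 * K.sum(
        1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    # beta = 1 recovers the plain VAE; beta > 1 puts more pressure
    # on the latent code, which encourages disentanglement.
    return K.mean(reconstruction_loss + beta * kl_loss)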

Conclusion

In this chapter, we've covered the principles of variational autoencoders (VAEs). As we learned, VAEs bear a resemblance to GANs in that both attempt to create synthetic outputs from a latent space. However, VAE networks are much simpler and easier to train than GANs. It is also clear how the conditional VAE and the β-VAE are similar in concept to the conditional GAN and the disentangled-representation GAN, respectively.

VAEs have an intrinsic mechanism to disentangle the latent vectors; therefore, building a β-VAE is straightforward. We should note that interpretable and disentangled codes are important in building intelligent agents.
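
As a rough illustration of what a disentangled code buys us, the sketch below traverses one latent dimension while holding everything else fixed; with a disentangled code, each step should change a single interpretable attribute of the generated digit (for example, tilt or thickness). It assumes a trained CVAE decoder named decoder that takes a 2-dim latent vector and a one-hot digit label; this interface is hypothesized here, not taken verbatim from the book's script:

import numpy as np

digit = 0
label = np.eye(10)[[digit]]              # one-hot condition for digit 0, shape (1, 10)
z = np.zeros((1, 2))                     # 2-dim latent code, starting at the origin
for value in np.linspace(-3.0, 3.0, 7):  # traverse only the first latent dimension
    z[0, 0] = value
    image = decoder.predict([z, label])  # one 28x28 digit 0 per traversal step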

In the next chapter, we're going to focus on reinforcement learning. Without any prior data, an agent learns by interacting with its world. We'll discuss how the agent can be rewarded for correct actions and punished for wrong ones.

References

1. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 2013 (https://arxiv.org/pdf/1312.6114.pdf).

2. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning Structured Output Representation Using Deep Conditional Generative Models. Advances in Neural Information Processing Systems, 2015 (http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf).

3. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35.8, 2013: 1798-1828 (https://arxiv.org/pdf/1206.5538.pdf).

4. Xi Chen and others. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. Advances in Neural Information Processing Systems, 2016 (http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf).

