Advanced Deep Learning with Keras
Chapter 8

Figure 8.3.3: Digits 0 to 3 generated as a function of latent vector mean values and one-hot label (β-VAE with β = 1, 7, and 10). For ease of interpretation, the range of values for the mean is similar to Figure 8.3.1.
Variational Autoencoders (VAEs)

The Keras code for β-VAE comes with pre-trained weights. To test β-VAE with β = 7 generating digit 0, we need to run:

$ python3 cvae-cnn-mnist-8.2.1.py --beta=7 --weights=beta-cvae_cnn_mnist.h5 --digit=0

Conclusion

In this chapter, we've covered the principles of variational autoencoders (VAEs). As we learned, VAEs bear a resemblance to GANs in that both attempt to create synthetic outputs from a latent space. However, VAE networks are much simpler and easier to train than GANs. It's also clear how conditional VAE and β-VAE are similar in concept to conditional GAN and disentangled representation GAN, respectively.

VAEs have an intrinsic mechanism to disentangle the latent vectors, so building a β-VAE is straightforward. We should note, however, that interpretable and disentangled codes are important in building intelligent agents.

In the next chapter, we're going to focus on reinforcement learning. Without any prior data, an agent learns by interacting with its world. We'll discuss how the agent can be rewarded for correct actions and punished for wrong ones.

References

1. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 2013 (https://arxiv.org/pdf/1312.6114.pdf).
2. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning Structured Output Representation Using Deep Conditional Generative Models. Advances in Neural Information Processing Systems, 2015 (http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf).
3. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35.8, 2013: 1798-1828 (https://arxiv.org/pdf/1206.5538.pdf).
4. Xi Chen and others. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. Advances in Neural Information Processing Systems, 2016 (http://papers.nips.cc/paper/6399-infogan-interpretable-representation-learning-by-information-maximizing-generative-adversarial-nets.pdf).
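To recap the idea behind the β-VAE discussed in this chapter, the only change from the plain VAE objective is that the KL divergence term is weighted by β, which is what encourages disentangled latent codes when β > 1. The following is a minimal NumPy sketch of the per-sample objective; the function name `beta_vae_loss` and the binary cross-entropy reconstruction term are illustrative assumptions, not the exact implementation in `cvae-cnn-mnist-8.2.1.py`:

```python
import numpy as np

def beta_vae_loss(x, x_decoded, z_mean, z_log_var, beta=1.0):
    """Per-sample beta-VAE loss: reconstruction + beta * KL divergence.

    With beta = 1 this reduces to the standard VAE loss; larger beta
    (e.g. 7 or 10, as in Figure 8.3.3) penalizes the KL term more,
    encouraging disentangled latent codes.
    """
    # Binary cross-entropy reconstruction loss, summed over pixels.
    eps = 1e-7  # avoid log(0)
    x_decoded = np.clip(x_decoded, eps, 1.0 - eps)
    reconstruction = -np.sum(
        x * np.log(x_decoded) + (1.0 - x) * np.log(1.0 - x_decoded), axis=-1
    )
    # KL divergence between q(z|x) = N(z_mean, exp(z_log_var)) and N(0, I),
    # in closed form for diagonal Gaussians.
    kl = -0.5 * np.sum(1.0 + z_log_var - z_mean ** 2 - np.exp(z_log_var), axis=-1)
    return reconstruction + beta * kl
```

Because β only scales the KL term, two losses computed on the same batch with different β values differ by exactly (β₂ − β₁) times the KL divergence.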