The cosine similarities between the input and output sentence vectors for a few sample sentences are shown below:

0.9790557622909546
0.9893233180046082
0.9869443774223328
0.9665998220443726
0.9893233180046082
0.9829331040382385
A histogram of the distribution of cosine similarities between the input and output
sentence vectors for the first 500 sentences in the test set is shown below. As before,
it confirms that the sentence vectors generated from the input and output of the
autoencoder are very similar, showing that the resulting sentence vector is a good
representation of the sentence.
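The plotting code itself is not reproduced on this page, so the following is a minimal,
self-contained sketch of how such similarities and their histogram could be computed.
The arrays x_vecs and y_vecs are simulated stand-ins for the real input and output
sentence vectors, and the vector dimension of 512 is an arbitrary assumption made
only so the sketch runs on its own.

import numpy as np
import matplotlib.pyplot as plt

def cosine_similarity(u, v):
    # Cosine similarity between two 1-D vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Simulated stand-ins for the sentence vectors taken from the autoencoder's
# input and output; in practice these would come from the trained model.
rng = np.random.default_rng(42)
x_vecs = rng.normal(size=(500, 512))
y_vecs = x_vecs + rng.normal(scale=0.1, size=x_vecs.shape)

# One similarity per sentence pair
sims = np.array([cosine_similarity(x, y) for x, y in zip(x_vecs, y_vecs)])
print(sims[:6])   # a few sample values, analogous to those listed above

# Histogram of the 500 similarities
plt.hist(sims, bins=10)
plt.xlabel("cosine similarity")
plt.ylabel("number of sentences")
plt.title("Similarity of input and output sentence vectors")
plt.show()

With real sentence vectors, a distribution concentrated near 1.0, as in the values
listed above, indicates that the autoencoder's output closely preserves each input
sentence vector.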
Summary
In this chapter we took an extensive look at a new generation of deep learning
models: autoencoders. We started with the vanilla autoencoder, and then moved on
to its variants: sparse autoencoders, denoising autoencoders, stacked autoencoders,
and convolutional autoencoders. We used autoencoders to reconstruct images, and
we also demonstrated how they can be used to clean noise from an image. Finally,
the chapter demonstrated how autoencoders can be used to generate sentence
vectors. All of these autoencoders were trained with unsupervised learning. In the
next chapter, we will delve deeper into some other unsupervised learning-based
deep learning models.