

0.9790557622909546
0.9893233180046082
0.9869443774223328
0.9665998220443726
0.9893233180046082
0.9829331040382385

A histogram of the distribution of cosine similarity values for the sentence vectors from the first 500 sentences in the test set is shown below. As before, it confirms that the sentence vectors generated from the input and output of the autoencoder are very similar, showing that the resulting sentence vector is a good representation of the sentence:
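As a rough sketch of how these similarities and the histogram might be produced (this is not the book's exact code), the snippet below computes the cosine similarity between each input sentence vector and the corresponding output sentence vector, then plots the distribution. Random vectors stand in for the encoder outputs so that the sketch is self-contained and runnable; the commented lines show where the trained `encoder` and `autoencoder` models would supply the real vectors.

```python
import numpy as np
import matplotlib.pyplot as plt

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors; 1.0 means identical direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# In the chapter, the sentence vectors would come from the trained models,
# for example (hypothetical model and variable names):
#   input_vecs  = encoder.predict(x_test[:500])
#   output_vecs = encoder.predict(autoencoder.predict(x_test[:500]))
# Here, random stand-ins keep the sketch self-contained.
rng = np.random.default_rng(42)
input_vecs = rng.normal(size=(500, 512))
output_vecs = input_vecs + 0.1 * rng.normal(size=(500, 512))

sims = np.array([cosine_similarity(u, v)
                 for u, v in zip(input_vecs, output_vecs)])

# Values close to 1.0, like those listed above, indicate the reconstruction
# preserves the sentence vector.
print(sims[:6])

# The histogram of similarity values described in the text.
plt.hist(sims, bins=10)
plt.xlabel("cosine similarity")
plt.ylabel("number of sentences")
plt.show()
```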

Summary

In this chapter we've had an extensive look at a new generation of deep learning models: autoencoders. We started with the vanilla autoencoder, and then moved on to its variants: sparse autoencoders, denoising autoencoders, stacked autoencoders, and convolutional autoencoders. We used autoencoders to reconstruct images, and we also demonstrated how they can be used to clean noise from an image. Finally, the chapter demonstrated how autoencoders can be used to generate sentence vectors. All of these autoencoders were trained with unsupervised learning. In the next chapter we will delve deeper into some other unsupervised deep learning models.

