
Autoencoders

Autoencoders are feed-forward, non-recurrent neural networks that learn by unsupervised learning (sometimes called self-supervised learning, since the input also serves as the target). In this chapter, you will learn about and implement different variants of autoencoders, and eventually learn how to stack autoencoders. We will also see how autoencoders can be used to generate MNIST digits, and finally cover the steps involved in building a long short-term memory (LSTM) autoencoder to generate sentence vectors. This chapter includes the following topics:

• Vanilla autoencoders

• Sparse autoencoders

• Denoising autoencoders

• Convolutional autoencoders

• Stacked autoencoders

• Generating sentences using LSTM autoencoders

Introduction to autoencoders

Autoencoders are a class of neural networks that attempt to recreate the input as their target using back-propagation. An autoencoder consists of two parts: an encoder and a decoder. The encoder reads the input and compresses it into a compact representation, and the decoder reads the compact representation and recreates the input from it. In other words, the autoencoder tries to learn the identity function by minimizing the reconstruction error. This gives autoencoders an inherent capability to learn a compact representation of data. They are at the center of deep belief networks and find applications in image reconstruction, clustering, machine translation, and much more.
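The encoder/decoder pipeline described above can be sketched in a few lines of plain NumPy. The following is an illustrative single-hidden-layer autoencoder trained by gradient descent on the reconstruction error; it is a minimal sketch, not the chapter's implementation, and the layer sizes, learning rate, and toy data are all arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 3          # compress an 8-d input into a 3-d code
X = rng.random((100, n_in))    # toy data (assumed, for illustration only)

# Encoder and decoder parameters
W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
b_dec = np.zeros(n_in)

lr = 0.1
for _ in range(2000):
    # Forward pass: encode to a compact code, then decode back
    h = np.tanh(X @ W_enc + b_enc)      # compact representation
    X_hat = h @ W_dec + b_dec           # reconstruction of the input
    err = X_hat - X                     # reconstruction error

    # Backward pass: gradients of the mean squared reconstruction error
    grad_W_dec = h.T @ err / len(X)
    grad_b_dec = err.mean(axis=0)
    dh = (err @ W_dec.T) * (1 - h ** 2)  # back-prop through tanh
    grad_W_enc = X.T @ dh / len(X)
    grad_b_enc = dh.mean(axis=0)

    W_dec -= lr * grad_W_dec
    b_dec -= lr * grad_b_dec
    W_enc -= lr * grad_W_enc
    b_enc -= lr * grad_b_enc

mse = np.mean((X_hat - X) ** 2)
```

Because the hidden layer is narrower than the input, the network cannot simply copy the data; minimizing the reconstruction error forces it to learn a compressed code, which is exactly the behaviour the chapter's TensorFlow variants build on.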
