Advanced Deep Learning with Keras

Autoencoders

In the previous chapter, Chapter 2, Deep Neural Networks, you were introduced

to the concepts of deep neural networks. We're now going to move on to look

at autoencoders, which are a neural network architecture that attempts to find

a compressed representation of the given input data.

Similar to the previous chapters, the input data may take multiple forms, including speech, text, images, or video. An autoencoder attempts to find a representation, or code, that enables useful transformations of the input data. For example, a denoising autoencoder learns a code that can be used to transform noisy data into clean data, such as converting an audio recording contaminated with static into clear sound.
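As a sketch of how such training pairs might be prepared (a hypothetical NumPy example, not the book's code), Gaussian noise can be added to clean samples, and the network is then trained to map each noisy sample back to its clean original:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical clean data: 100 samples of 784 features in [0, 1]
# (e.g. flattened 28x28 grayscale images).
x_clean = rng.random((100, 784)).astype("float32")

# Corrupt with Gaussian noise, then clip back into the valid range.
noise = rng.normal(loc=0.0, scale=0.5, size=x_clean.shape)
x_noisy = np.clip(x_clean + noise, 0.0, 1.0).astype("float32")

# A denoising autoencoder is then trained on (x_noisy, x_clean) pairs:
# the noisy version is the input, the clean version is the target.
```

The key point is that the targets are the clean samples themselves, so no human labeling is required.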

Autoencoders will learn the code automatically from the data alone without

human labeling. As such, autoencoders can be classified under unsupervised

learning algorithms.

In later chapters of this book, we will look at Generative Adversarial Networks

(GANs) and Variational Autoencoders (VAEs) which are also representative

forms of unsupervised learning algorithms. This is in contrast to the supervised

learning algorithms we discussed in the previous chapters where human

annotations were required.

In its simplest form, an autoencoder learns the representation, or code, by trying to copy its input to its output. However, an autoencoder is not simply an identity mapping; if the network could copy the input directly, it would not be forced to uncover the hidden structure in the input distribution. This is why the code is constrained, typically by making it much smaller than the input.

An autoencoder encodes the input distribution into a low-dimensional tensor, which usually takes the form of a vector. This approximates the hidden structure and is commonly referred to as the latent representation, code, or latent vector. This process constitutes the encoding part. The latent vector is then decoded by the decoder part to recover the original input.
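This encoder–decoder structure can be sketched in Keras as follows (a minimal illustration with hypothetical layer sizes, not the book's exact model):

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Hypothetical sizes for illustration: a 784-dimensional input
# (e.g. a flattened 28x28 image) compressed to a 16-dim latent vector.
input_dim, latent_dim = 784, 16

inputs = Input(shape=(input_dim,))
# Encoder: maps the input to the low-dimensional latent vector.
latent = Dense(latent_dim, activation="relu")(inputs)
# Decoder: attempts to reconstruct the input from the latent vector.
outputs = Dense(input_dim, activation="sigmoid")(latent)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train by copying input to output: x serves as both data and target.
x = np.random.rand(256, input_dim).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=32, verbose=0)

reconstruction = autoencoder.predict(x, verbose=0)
```

Because the latent vector is far smaller than the input, the network cannot memorize an identity mapping and must instead capture the input's underlying structure.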
