
Advanced Deep Learning with Keras


Autoencoders

After training the autoencoder for one epoch and reaching a validation loss of 0.01, we're able to verify whether it can encode and decode MNIST data that it has not seen before. Figure 3.2.4 shows eight samples from the test data and the corresponding decoded images. Apart from some minor blurring, we can easily see that the autoencoder recovers the input with good quality. The results improve as we train for a larger number of epochs.

Figure 3.2.4: Prediction of the autoencoder from the test data. The first 2 rows are the original input test data, and the last 2 rows are the predicted data.
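A minimal sketch of this verification step is shown below. It assumes the trained autoencoder model and the normalized x_test array from Listing 3.2.1 are already in scope, and it uses a simplified layout (one row of originals over one row of reconstructions, rather than the two-plus-two layout of Figure 3.2.4):

import matplotlib.pyplot as plt

# reconstruct unseen test images with the trained autoencoder
x_decoded = autoencoder.predict(x_test)

# show 8 original test digits (top row) above their
# reconstructions (bottom row)
plt.figure(figsize=(8, 2))
for i in range(8):
    plt.subplot(2, 8, i + 1)
    plt.imshow(x_test[i].reshape(28, 28), cmap="gray")
    plt.axis("off")
    plt.subplot(2, 8, i + 9)
    plt.imshow(x_decoded[i].reshape(28, 28), cmap="gray")
    plt.axis("off")
plt.show()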

At this point, we may be wondering how we can visualize the latent vectors. A simple method is to force the autoencoder to learn the MNIST digit features using a 2-dim latent vector, which we can then plot directly on a 2D plane to see how the MNIST codes are distributed. By setting latent_dim = 2 in the autoencoder-mnist-3.2.1.py code and using plot_results() to plot each MNIST digit as a function of its 2-dim latent vector, Figure 3.2.5 and Figure 3.2.6 show the distribution of MNIST digits as a function of the latent codes. These figures were generated after 20 epochs of training. For convenience, the program is saved as autoencoder-2dim-mnist-3.2.2.py, with the partial code shown in Listing 3.2.2.

Following is Listing 3.2.2, autoencoder-2dim-mnist-3.2.2.py, which shows the function for visualizing the distribution of the MNIST digits over the 2-dim latent codes. The rest of the code is practically identical to Listing 3.2.1 and is not shown here.

def plot_results(models,
                 data,
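                 batch_size=32,
                 model_name="autoencoder_2dim"):
    # NOTE: only the first two lines of this listing survive in the
    # excerpt; everything from batch_size onward is a reconstruction
    # sketch, so the parameter names, file names, and grid range are
    # assumptions rather than the book's exact code.
    import os
    import matplotlib.pyplot as plt
    import numpy as np

    encoder, decoder = models
    x_test, y_test = data
    os.makedirs(model_name, exist_ok=True)

    # Figure 3.2.5: scatter plot of the 2-dim latent codes of the
    # test images, color-coded by digit label
    filename = os.path.join(model_name, "latent_2dim.png")
    z = encoder.predict(x_test, batch_size=batch_size)
    plt.figure(figsize=(12, 10))
    plt.scatter(z[:, 0], z[:, 1], c=y_test)
    plt.colorbar()
    plt.xlabel("z[0]")
    plt.ylabel("z[1]")
    plt.savefig(filename)
    plt.show()

    # Figure 3.2.6: digits generated by the decoder while stepping
    # over a 30x30 grid of points on the latent plane (the grid
    # bounds of +/-4 are an assumed range)
    filename = os.path.join(model_name, "digits_over_latent.png")
    n = 30
    digit_size = 28
    figure = np.zeros((digit_size * n, digit_size * n))
    grid_x = np.linspace(-4, 4, n)
    grid_y = np.linspace(-4, 4, n)[::-1]
    for i, yi in enumerate(grid_y):
        for j, xi in enumerate(grid_x):
            z_sample = np.array([[xi, yi]])
            x_decoded = decoder.predict(z_sample)
            digit = x_decoded[0].reshape(digit_size, digit_size)
            figure[i * digit_size:(i + 1) * digit_size,
                   j * digit_size:(j + 1) * digit_size] = digit
    plt.figure(figsize=(10, 10))
    plt.imshow(figure, cmap="Greys_r")
    plt.axis("off")
    plt.savefig(filename)
    plt.show()

In the full program, this function would be called after training with something like plot_results((encoder, decoder), (x_test, y_test)), although the exact call site is not part of this excerpt.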

