
The following code plots ten original test digits in the top row and their reconstructions in the bottom row:

import matplotlib.pyplot as plt

number = 10  # how many digits we will display
plt.figure(figsize=(20, 4))
for index in range(number):
    # display original
    ax = plt.subplot(2, number, index + 1)
    plt.imshow(x_test[index].reshape(28, 28), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, number, index + 1 + number)
    plt.imshow(autoencoder(x_test)[index].numpy().reshape(28, 28),
               cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

It is interesting to note that in the preceding code we reduced the dimensions of the input from 784 to 128 and our network could still reconstruct the original image. This should give you an idea of the power of the autoencoder for dimensionality reduction.
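If you want to look at those 128-dimensional codes themselves, you can run the encoder half of the model on its own. The short sketch below assumes your autoencoder is a subclassed Keras model that stores its encoder as autoencoder.encoder; adjust the attribute name if your model differs:

# Pull out the 128-dimensional codes for the test images.
# Assumes the model stores its encoder as `autoencoder.encoder`.
codes = autoencoder.encoder(x_test).numpy()
print(codes.shape)  # (number_of_test_images, 128)

Each row of codes is the compressed representation from which the decoder reconstructs a full 784-pixel image.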

One advantage of autoencoders over PCA for dimensionality reduction is that while PCA can only represent linear transformations, we can use non-linear activation functions in autoencoders, thus introducing non-linearities in our encodings:

Figure 3: Comparison between the result of a PCA and that of a stacked autoencoder

The preceding figure is reproduced from the Hinton paper Reducing the Dimensionality of Data with Neural Networks. It compares the result of a PCA (A) with that of a stacked autoencoder with an architecture consisting of 784-1000-500-250-2 neurons.
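To make the comparison concrete, here is a minimal sketch of the two approaches side by side: a 2-component PCA, which is a single linear projection, and an encoder with the same 784-1000-500-250-2 shape as in the figure, whose ReLU layers make the mapping non-linear. The use of scikit-learn's PCA and an untrained Keras stack are illustrative assumptions, not Hinton's original setup (which used layer-wise pretraining):

import tensorflow as tf
from sklearn.decomposition import PCA

# PCA: one linear projection from 784 down to 2 dimensions
pca = PCA(n_components=2)
pca_codes = pca.fit_transform(x_test.reshape(-1, 784))

# Encoder with the 784-1000-500-250-2 shape from the figure;
# the ReLU activations make the learned mapping non-linear
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(1000, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(500, activation='relu'),
    tf.keras.layers.Dense(250, activation='relu'),
    tf.keras.layers.Dense(2)  # 2-D codes, matching PCA's 2 components
])
# Train it with a mirror-image decoder before reading off the codes
ae_codes = encoder(x_test.reshape(-1, 784)).numpy()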

