

# display the first two rows (input images) and the last two rows (decoded images)
plt.title('Input: 1st 2 rows, Decoded: last 2 rows')
plt.imshow(imgs, interpolation='none', cmap='gray')
plt.savefig('input_and_decoded.png')
plt.show()

Figure 3.2.1: The encoder model is made up of Conv2D(32)-Conv2D(64)-Dense(16) in order to generate the low-dimensional latent vector

The decoder in Listing 3.2.1 decompresses the latent vector in order to recover the MNIST digit. The decoder input stage is a Dense layer that accepts the latent vector. Its number of units is equal to the product of the saved Conv2D output dimensions from the encoder, so that the output of the Dense layer can easily be reshaped into feature maps for Conv2DTranspose to eventually recover the original MNIST image dimensions.
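To make the sizing concrete, the following is a minimal sketch of that decoder input stage, assuming the encoder's last Conv2D output shape was saved as (7, 7, 64) and the latent vector has 16 units; these values are assumptions consistent with the encoder in Figure 3.2.1, not copied from the listing:

from tensorflow.keras.layers import Input, Dense, Reshape

latent_dim = 16      # matches the Dense(16) latent vector of the encoder
shape = (7, 7, 64)   # assumed Conv2D output shape saved from the encoder

latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
# Dense units = product of the saved dimensions: 7 * 7 * 64 = 3136
x = Dense(shape[0] * shape[1] * shape[2])(latent_inputs)
# reshape back into feature-map form so Conv2DTranspose can upsample it
x = Reshape(shape)(x)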

The decoder is made of a stack of three Conv2DTranspose layers. In our case, we're going to use a transposed CNN (sometimes called deconvolution), which is more commonly used in decoders. We can think of a transposed CNN (Conv2DTranspose) as the reverse of a CNN: if a CNN converts an image into feature maps, a transposed CNN produces an image given feature maps. Figure 3.2.2 shows the decoder model.
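As an illustration of this reversal, the following sketch stacks three Conv2DTranspose layers that upsample (7, 7, 64) feature maps back to a 28 x 28 x 1 MNIST-sized image; the kernel size of 3 and the filter counts of 64 and 32 are assumptions that mirror the encoder, not the book's exact listing:

from tensorflow.keras.layers import Input, Conv2DTranspose
from tensorflow.keras.models import Model

# feature-map input, e.g. the reshaped Dense output from the previous sketch
feature_maps = Input(shape=(7, 7, 64))
# each strided Conv2DTranspose doubles the spatial size: 7x7 -> 14x14 -> 28x28
x = Conv2DTranspose(filters=64, kernel_size=3, strides=2, activation='relu', padding='same')(feature_maps)
x = Conv2DTranspose(filters=32, kernel_size=3, strides=2, activation='relu', padding='same')(x)
# the last Conv2DTranspose maps back to a single-channel 28 x 28 image
outputs = Conv2DTranspose(filters=1, kernel_size=3, activation='sigmoid', padding='same')(x)

decoder_tail = Model(feature_maps, outputs)
decoder_tail.summary()   # output shape is (None, 28, 28, 1)

With padding='same' and strides=2, each transposed convolution doubles the feature-map height and width, which is how the original image dimensions are recovered.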

