Advanced Deep Learning with Keras

Chapter 3

First, we will implement the autoencoder by building the encoder. Listing 3.2.1 shows the encoder, which compresses an MNIST digit into a 16-dim latent vector. The encoder is a stack of two Conv2D layers; the final stage is a Dense layer with 16 units that generates the latent vector. Figure 3.2.1 shows the architecture diagram generated by plot_model(), which matches the text summary produced by encoder.summary(). The shape of the output of the last Conv2D layer is saved so that the dimensions of the decoder input layer can be computed, allowing easy reconstruction of the MNIST image.
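The arithmetic behind that saved shape can be checked by hand. As a sketch, assuming each Conv2D uses strides of 2 with 'same' padding (an assumption here, since those arguments are not shown in the excerpt above), each layer halves the feature map, taking 28 down to 14 and then 7:

```python
import math

def conv2d_out(size, kernel, stride, padding="same"):
    # Keras shape rule: 'same' padding gives ceil(size / stride);
    # 'valid' padding gives ceil((size - kernel + 1) / stride)
    if padding == "same":
        return math.ceil(size / stride)
    return math.ceil((size - kernel + 1) / stride)

size = 28                     # MNIST image height/width
for filters in [32, 64]:      # layer_filters from the listing
    size = conv2d_out(size, kernel=3, stride=2)
    print(size)               # 14, then 7

# shape saved before flattening: (7, 7, 64) -- the target the
# decoder must reshape back to when reconstructing the image
print((size, size, 64))
```

With these assumed settings, the decoder's first Dense/Reshape stage must therefore produce a (7, 7, 64) tensor.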

Listing 3.2.1, autoencoder-mnist-3.2.1.py, shows an autoencoder implementation using Keras. The latent vector is 16-dim:

from keras.layers import Dense, Input
from keras.layers import Conv2D, Flatten
from keras.layers import Reshape, Conv2DTranspose
from keras.models import Model
from keras.datasets import mnist
from keras.utils import plot_model
from keras import backend as K

import numpy as np
import matplotlib.pyplot as plt

# load MNIST dataset
(x_train, _), (x_test, _) = mnist.load_data()

# reshape to (28, 28, 1) and normalize input images
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# network parameters
input_shape = (image_size, image_size, 1)
batch_size = 32
kernel_size = 3
latent_dim = 16
# encoder/decoder number of filters per CNN layer
layer_filters = [32, 64]
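The listing continues on the next page with the layer stack itself. The core idea it implements, compressing a 784-pixel image into a 16-dim code and expanding it back, can be sketched without Keras. The projection matrices below are random and untrained; they only illustrate the shapes flowing through an encoder and decoder, not the learned model:

```python
import numpy as np

rng = np.random.default_rng(0)

# one stand-in "MNIST" image, already normalized to [0, 1]
image = rng.random((28, 28, 1)).astype("float32")

# encoder sketch: flatten, then one untrained linear map to 16 dims
flat = image.reshape(-1)                # shape (784,)
w_enc = rng.standard_normal((784, 16))  # stand-in for learned weights
latent = flat @ w_enc                   # 16-dim latent vector

# decoder sketch: map the latent vector back to the image shape
w_dec = rng.standard_normal((16, 784))
reconstruction = (latent @ w_dec).reshape(28, 28, 1)

print(latent.shape)          # (16,)
print(reconstruction.shape)  # (28, 28, 1)
```

The Keras model replaces these random matrices with convolutional feature extraction and trains both halves so the reconstruction matches the input.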
