Advanced Deep Learning with Keras

Chapter 3

Figure 3.2.2: The decoder model is made of Dense(16)-Conv2DTranspose(64)-Conv2DTranspose(32)-Conv2DTranspose(1). The input is the latent vector, which is decoded to recover the original input.

By joining the encoder and decoder together, we're able to build the autoencoder. Figure 3.2.3 illustrates the model diagram of the autoencoder. The tensor output of the encoder is also the input to the decoder, which generates the output of the autoencoder. In this example, we'll be using the MSE loss function and the Adam optimizer. During training, the input is the same as the output, x_train. We should note that in our example, there are only a few layers, which are sufficient to drive the validation loss to 0.01 in one epoch. For more complex datasets, you may need a deeper encoder and decoder, as well as more epochs of training.

Figure 3.2.3: The autoencoder model is built by joining an encoder model and a decoder model together. There are 178k parameters for this autoencoder.
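
To make this concrete, the following is a minimal sketch of joining an encoder and a decoder into an autoencoder and training it on MNIST, assuming 28x28x1 inputs and a 16-dimensional latent vector as in the figures. The variable names (encoder, decoder, autoencoder) and the exact layer hyperparameters are illustrative assumptions rather than the book's exact listing.

# A minimal sketch: join an encoder and a decoder into an autoencoder,
# then train with MSE loss and the Adam optimizer.
# Shapes (28x28x1 inputs, 16-dim latent vector) follow the figures;
# the specific kernel sizes and strides here are assumptions.
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose,
                                     Dense, Flatten, Reshape)
from tensorflow.keras.models import Model

latent_dim = 16
input_shape = (28, 28, 1)

# Encoder: stacked Conv2D layers compress the image into a latent vector.
inputs = Input(shape=input_shape, name='encoder_input')
x = Conv2D(32, 3, strides=2, activation='relu', padding='same')(inputs)
x = Conv2D(64, 3, strides=2, activation='relu', padding='same')(x)
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)
encoder = Model(inputs, latent, name='encoder')

# Decoder: Dense-Conv2DTranspose(64)-Conv2DTranspose(32)-Conv2DTranspose(1)
# maps the latent vector back to the original image shape.
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
x = Dense(7 * 7 * 64)(latent_inputs)
x = Reshape((7, 7, 64))(x)
x = Conv2DTranspose(64, 3, strides=2, activation='relu', padding='same')(x)
x = Conv2DTranspose(32, 3, strides=2, activation='relu', padding='same')(x)
outputs = Conv2DTranspose(1, 3, activation='sigmoid', padding='same',
                          name='decoder_output')(x)
decoder = Model(latent_inputs, outputs, name='decoder')

# Autoencoder: the encoder output feeds the decoder.
autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')
autoencoder.compile(loss='mse', optimizer='adam')

# During training the input is also the target (x_train in, x_train out).
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32')[..., np.newaxis] / 255.0
x_test = x_test.astype('float32')[..., np.newaxis] / 255.0
autoencoder.fit(x_train, x_train,
                validation_data=(x_test, x_test),
                epochs=1, batch_size=32)

Passing x_train as both the input and the target in fit() is what turns this into a reconstruction task: the network is rewarded for reproducing its own input through the low-dimensional latent bottleneck.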
