Advanced Deep Learning with Keras
Chapter 3

Conclusion

In this chapter, we were introduced to autoencoders: neural networks that compress input data into low-dimensional codes in order to efficiently perform structural transformations such as denoising and colorization. We've laid the foundations for the more advanced topics of GANs and VAEs, which we will introduce in later chapters, while exploring how autoencoders can be implemented in Keras. We've demonstrated how to build an autoencoder from two building block models, an encoder and a decoder. We've also learned how extracting the hidden structure of the input distribution is one of the common tasks in AI.
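To recap the building-block idea, the following is a minimal sketch of composing an autoencoder from separate encoder and decoder models in Keras. The 784-dimensional input (a flattened 28 x 28 image), the 16-dimensional latent code, and the layer sizes are illustrative assumptions, not the chapter's exact architecture; the random training data is a placeholder for a real dataset such as MNIST.

import numpy as np
from tensorflow.keras import layers, Model

input_dim, latent_dim = 784, 16

# Encoder: compresses the input into a low-dimensional latent code.
inputs = layers.Input(shape=(input_dim,), name='encoder_input')
x = layers.Dense(256, activation='relu')(inputs)
latent = layers.Dense(latent_dim, name='latent_vector')(x)
encoder = Model(inputs, latent, name='encoder')

# Decoder: reconstructs the input from the latent code.
latent_inputs = layers.Input(shape=(latent_dim,), name='decoder_input')
x = layers.Dense(256, activation='relu')(latent_inputs)
outputs = layers.Dense(input_dim, activation='sigmoid')(x)
decoder = Model(latent_inputs, outputs, name='decoder')

# Autoencoder: the decoder applied to the encoder's output.
autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')
autoencoder.compile(optimizer='adam', loss='mse')

# The autoencoder is trained with the input serving as its own target.
x_train = np.random.rand(1024, input_dim).astype('float32')  # placeholder data
autoencoder.fit(x_train, x_train, epochs=1, batch_size=32)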

Once the latent code has been uncovered, many structural operations can be performed on the original input distribution. To gain a better understanding of the input distribution, the hidden structure in the form of the latent vector can be visualized using a low-dimensional embedding, similar to what we did in this chapter, or through more sophisticated dimensionality reduction techniques such as t-SNE or PCA.
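As a hedged sketch of that visualization step, the snippet below assumes the encoder model from the previous sketch and a labelled test set; the random arrays stand in for real data. When the latent dimension is larger than two, PCA (fast, linear) or t-SNE (slower, non-linear) from scikit-learn can project the codes down to 2-D for plotting.

import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

x_test = np.random.rand(256, 784).astype('float32')  # placeholder inputs
y_test = np.random.randint(0, 10, size=256)          # placeholder labels

# Latent codes produced by the trained encoder, shape (n_samples, latent_dim).
latent = encoder.predict(x_test)

# Project the latent codes to 2-D: PCA for a linear view,
# or switch to the t-SNE line for a non-linear one.
latent_2d = PCA(n_components=2).fit_transform(latent)
# latent_2d = TSNE(n_components=2).fit_transform(latent)

plt.scatter(latent_2d[:, 0], latent_2d[:, 1], c=y_test, cmap='tab10', s=5)
plt.colorbar()
plt.xlabel('latent dimension 1')
plt.ylabel('latent dimension 2')
plt.show()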

Apart from denoising and colorization, autoencoders are used to convert the input distribution into low-dimensional latent codes that can be further processed for other tasks such as segmentation, detection, tracking, reconstruction, and visual understanding. In Chapter 8, Variational Autoencoders (VAEs), we will discuss VAEs, which are structurally the same as autoencoders but differ by having an interpretable latent code that can produce a continuous projection of latent codes. In the next chapter, we will embark on one of the most important recent breakthroughs in AI, the introduction of GANs, where we will learn about the core strengths of GANs and their ability to synthesize data or signals that look real.
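To make the idea of reusing latent codes for other tasks concrete, here is a small hedged sketch that repurposes the trained encoder from the earlier example as a frozen feature extractor for a downstream classifier. The 10-class softmax head is an illustrative assumption, not a recipe from this chapter.

from tensorflow.keras import layers, Model

encoder.trainable = False  # keep the learned latent representation fixed

# A small classifier head on top of the latent code.
features = encoder.output
logits = layers.Dense(10, activation='softmax', name='classifier')(features)
classifier = Model(encoder.input, logits, name='latent_classifier')

classifier.compile(optimizer='adam',
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
# classifier.fit(x_train, y_train, epochs=5, batch_size=32)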

