Preface

The chapter will then cover various extensions to the basic embedding approach, such as using character trigrams instead of words (fastText), retaining word context by replacing static embeddings with a neural network (ELMo, Google Universal Sentence Encoder), sentence embeddings (InferSent, Skip-Thoughts), and using pretrained language models for embeddings (ULMFiT, BERT).
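For a flavor of the basic embedding approach that these techniques extend, here is a minimal Keras sketch mapping integer word IDs to dense vectors; the vocabulary size, embedding dimension, and toy word IDs are illustrative assumptions, not values from the book.

    import numpy as np
    import tensorflow as tf

    # Minimal sketch: a trainable lookup table from word IDs to dense vectors.
    # Vocabulary size and embedding dimension are arbitrary examples.
    VOCAB_SIZE = 10_000
    EMBED_DIM = 100

    embedding = tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM)

    # A toy batch of two "sentences", each a sequence of five word IDs.
    word_ids = np.array([[1, 42, 7, 99, 3], [5, 5, 0, 12, 8]])
    vectors = embedding(word_ids)
    print(vectors.shape)  # (2, 5, 100): one 100-dim vector per word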

Chapter 8, Recurrent Neural Networks, describes the basic architecture of Recurrent Neural Networks (RNNs) and why it is well suited for sequence learning tasks such as those found in NLP. It will cover several RNN variants: LSTM, the Gated Recurrent Unit (GRU), peephole LSTM, and bidirectional LSTM. It will go into more depth on how an RNN can be used as a language model. It will then cover the seq2seq model, a type of RNN-based encoder-decoder architecture originally used in machine translation, followed by attention mechanisms as a way of enhancing the performance of seq2seq architectures, and finally the Transformer architecture (BERT, GPT-2), which is based on the paper "Attention Is All You Need".
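As a preview of the kind of model the chapter builds toward, here is a minimal sketch of a bidirectional LSTM sequence classifier in Keras; all layer sizes and the toy input are illustrative assumptions.

    import numpy as np
    import tensorflow as tf

    # Minimal sketch: a bidirectional LSTM that reads a sequence of word IDs
    # and emits a single binary prediction. All sizes are arbitrary examples.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # A toy forward pass: one "sentence" of five word IDs.
    probs = model(np.array([[1, 42, 7, 99, 3]]))
    print(probs.shape)  # (1, 1)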

Chapter 9, Autoencoders, describes autoencoders, a class of neural networks that attempt to reconstruct their input as their target. It will cover different varieties of autoencoder, such as sparse autoencoders, convolutional autoencoders, and denoising autoencoders. The chapter will train a denoising autoencoder to remove noise from input images, and will demonstrate how autoencoders can be used to generate MNIST digits. Finally, it will also cover the steps involved in building an LSTM autoencoder to generate sentence vectors.
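To make the denoising idea concrete, here is a minimal Keras sketch that trains a small fully connected autoencoder to map noisy MNIST images back to clean ones; the architecture, noise level, and training budget are illustrative assumptions.

    import numpy as np
    import tensorflow as tf

    # Load MNIST and flatten to 784-dim vectors in [0, 1].
    (x_train, _), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

    # Corrupt the inputs with Gaussian noise; the clean images are the targets.
    x_noisy = np.clip(x_train + 0.3 * np.random.randn(*x_train.shape), 0.0, 1.0)

    # A small encoder-decoder; layer sizes are arbitrary examples.
    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # encoder
        tf.keras.layers.Dense(32, activation="relu"),                       # bottleneck
        tf.keras.layers.Dense(128, activation="relu"),                      # decoder
        tf.keras.layers.Dense(784, activation="sigmoid"),                   # reconstruction
    ])
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
    autoencoder.fit(x_noisy, x_train, epochs=1, batch_size=256)  # noisy in, clean out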

Chapter 10, Unsupervised Learning, delves into unsupervised learning models. It will cover clustering and dimensionality reduction techniques such as Principal Component Analysis (PCA), k-means, and self-organizing maps. It will go into the details of Boltzmann Machines and their implementation using TensorFlow. The concepts covered will be extended to build Restricted Boltzmann Machines (RBMs).
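As a taste of the clustering side, here is a minimal NumPy sketch of the k-means loop (assignment step, then centroid update); the toy data, k, and iteration budget are illustrative assumptions.

    import numpy as np

    # Minimal k-means sketch: alternate between assigning each point to its
    # nearest centroid and moving each centroid to the mean of its cluster.
    rng = np.random.default_rng(0)
    points = rng.normal(size=(200, 2))                   # toy 2-D data
    k = 3
    centroids = points[rng.choice(len(points), size=k, replace=False)]

    for _ in range(10):                                  # fixed budget for brevity
        # Pairwise distances, shape (200, k), then hard assignment.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Update step; keep the old centroid if a cluster went empty.
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])

    print(centroids)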

Chapter 11, Reinforcement Learning, focuses on reinforcement learning. It will start with the Q-learning algorithm: beginning with the Bellman equation, the chapter will cover concepts like discounted rewards, exploration and exploitation, and discount factors. It will explain policy-based and model-based reinforcement learning. Finally, a Deep Q-Network (DQN) will be built to play an Atari game.
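For orientation, here is a minimal tabular sketch of the Bellman-style Q-learning update, Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)); the toy chain environment and all hyperparameters are illustrative assumptions.

    import numpy as np

    # Minimal tabular Q-learning on a toy chain: states 0..4, actions
    # 0 (left) and 1 (right); reaching the last state pays reward 1.
    N_STATES, N_ACTIONS = 5, 2
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    Q = np.zeros((N_STATES, N_ACTIONS))
    rng = np.random.default_rng(0)

    def step(state, action):
        """Move left/right along the chain; episode ends at the last state."""
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, reward, next_state == N_STATES - 1

    for _ in range(500):                     # training episodes
        state, done = 0, False
        while not done:
            # Epsilon-greedy: explore with probability EPSILON, else exploit.
            if rng.random() < EPSILON:
                action = int(rng.integers(N_ACTIONS))
            else:
                action = int(Q[state].argmax())
            next_state, reward, done = step(state, action)
            # Bellman update with a discounted estimate of future reward.
            Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                         - Q[state, action])
            state = next_state

    print(Q)  # "right" actions should dominate after training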

Chapter 12, TensorFlow and Cloud, introduces you to the exciting field of AutoML. It talks about automatic data preparation, automatic feature engineering, and automatic model generation. The chapter also introduces AutoKeras and Google Cloud Platform AutoML, with its multiple solutions for table, vision, text, translation, and video processing.
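As a flavor of the AutoKeras API, here is a minimal sketch of an automated image classifier search on MNIST; the trial and epoch budgets are kept tiny purely for illustration, and the exact API may vary across AutoKeras versions.

    import autokeras as ak
    import tensorflow as tf

    # Minimal AutoKeras sketch: let the library search for an image
    # classification architecture on MNIST. max_trials=1 tries a single
    # candidate model; a real search would use far more trials and epochs.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    clf = ak.ImageClassifier(max_trials=1)
    clf.fit(x_train, y_train, epochs=1)
    print(clf.evaluate(x_test, y_test))

    model = clf.export_model()  # the best model as a plain Keras model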
