
Output

tensor(0.3504, device='cuda:0')

Indeed, they have an even lower similarity now.

For more details on Transformer word embeddings, please check "Transformer Embeddings."[197]

In the "Pre-trained PyTorch Embeddings" section, we averaged (classical) word embeddings to get a single vector for each sentence. We could do the same thing using contextual word embeddings instead, as the sketch below illustrates.
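A minimal sketch of that averaging, assuming a sentence whose tokens were already embedded by a TransformerWordEmbeddings model (the variable names are illustrative, not from the book):

import torch

# every token carries a contextual embedding after embed() runs;
# stacking them yields a [num_tokens, embedding_dim] tensor, and the
# mean collapses it into a single vector for the whole sentence
token_vectors = torch.stack([token.embedding for token in sentence])
sentence_vector = token_vectors.mean(dim=0)

But we don’t have to, because we can use…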

Document Embeddings

We can use pre-trained models to generate embeddings for whole documents instead of for single words, thus eliminating the need to average word embeddings. In our case, a document is a sentence:

from flair.data import Sentence

documents = [Sentence(watch1), Sentence(watch2)]

To actually get the embeddings, we use TransformerDocumentEmbeddings in the same way as in the other examples:

from flair.embeddings import TransformerDocumentEmbeddings

# loads a pre-trained BERT model that produces one embedding per document
bert_doc = TransformerDocumentEmbeddings('bert-base-uncased')
# attaches a single embedding to each Sentence in the list, in place
bert_doc.embed(documents)
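Once embed() has run, each Sentence exposes a single embedding tensor for the whole document. As a quick sketch (the embedding attribute is Flair's, but this comparison code is an assumption, not from the book), we could compare the two documents with the same cosine similarity we used for the word embeddings:

import torch.nn.functional as F

# each embedded Sentence now holds one vector for the whole document
doc1 = documents[0].embedding
doc2 = documents[1].embedding

# cosine similarity between the two document vectors
sim = F.cosine_similarity(doc1.unsqueeze(0), doc2.unsqueeze(0))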

