Word embeddings are based on the distributional hypothesis, which states that
words that occur in similar contexts tend to have similar meanings. Hence the class
of word embedding-based encodings is also known as distributed representations,
which we will talk about next.
Distributed representations
Distributed representations attempt to capture the meaning of a word by
considering its relations with other words in its context. The idea behind the
distributional hypothesis is captured in this quote from J. R. Firth, a linguist who
first proposed this idea:
"You shall know a word by the company it keeps."
How does this work? By way of example, consider the following pair of sentences:
Paris is the capital of France.
Berlin is the capital of Germany.
Even assuming no knowledge of world geography, the sentence pair implies some
sort of relationship between the entities Paris, France, Berlin, and Germany that
could be represented as:
"Paris" is to "France" as "Berlin" is to "Germany"
Distributed representations are based on the idea that there exists some
transformation φ such that:
φ("Paris") − φ("France") ≈ φ("Berlin") − φ("Germany")
In other words, a distributed embedding space is one where words that are used
in similar contexts are close to one another. Therefore, similarity between the word
vectors in this space would roughly correspond to the semantic similarity between
the words.
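To make this concrete, here is a minimal sketch of that arithmetic, assuming gensim's downloader API and its pretrained "glove-wiki-gigaword-100" GloVe vectors (gensim and GloVe are covered later in this chapter; the particular model name and lowercase tokens are illustrative choices, not code from the book):

import gensim.downloader as api

# Download and load pretrained 100-dimensional GloVe vectors as a
# gensim KeyedVectors object (this vocabulary is lowercased).
word_vectors = api.load("glove-wiki-gigaword-100")

# φ("france") − φ("paris") + φ("berlin") should land near φ("germany")
print(word_vectors.most_similar(
    positive=["france", "berlin"], negative=["paris"], topn=3))

# Cosine similarity between word vectors as a proxy for semantic similarity
print(word_vectors.similarity("important", "crucial"))

If the vectors have captured the regularity above, the analogy query should rank "germany" at or near the top, and the similarity score for "important" and "crucial" should be noticeably higher than for an unrelated word pair.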
Figure 1 shows a TensorBoard visualization of the embeddings of words around the
word "important" in the embedding space. As you can see, the neighbors of the word
tend to be closely related, or interchangeable with the original word.
For example, "crucial" is virtually a synonym, and it is easy to see how the words
"historical" or "valuable" could be substituted in certain situations:
Figure 1: Visualization of nearest neighbors of the word "important" in a word embedding dataset, from the TensorFlow Embedding Guide (https://www.tensorflow.org/guide/embedding)
In the next section we will look at various types of distributed representations
(or word embeddings).
Static embeddings
Static embeddings are the oldest type of word embedding. The embeddings are
generated against a large corpus but the number of words, though large, is finite.
You can think of a static embedding as a dictionary, with words as the keys and
their corresponding vector as the value. If you have a word whose embedding
needs to be looked up that was not in the original corpus, then you are out of luck.
In addition, a word has the same embedding regardless of how it is used, so static
embeddings cannot address the problem of polysemy, that is, words with multiple
meanings. We will explore this issue further when we cover non-static embeddings
later in this chapter.
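As a rough mental model of this dictionary analogy (a toy sketch with made-up words and vectors, not anything trained on a real corpus), a static embedding lookup behaves like this:

import numpy as np

# A toy static embedding table: a fixed vocabulary mapped to fixed vectors.
embeddings = {
    "paris":   np.array([0.9, 0.1, 0.4, 0.0]),
    "france":  np.array([0.8, 0.2, 0.5, 0.1]),
    "berlin":  np.array([0.7, 0.9, 0.3, 0.0]),
    "germany": np.array([0.6, 1.0, 0.4, 0.1]),
}

def lookup(word):
    # The same vector comes back no matter how the word is used in a sentence...
    vector = embeddings.get(word.lower())
    if vector is None:
        # ...and a word that was not in the original corpus has no embedding at all.
        raise KeyError(f"'{word}' was not in the training corpus")
    return vector

print(lookup("Paris"))   # always the same vector, regardless of context
lookup("smartphone")     # raises KeyError: an out-of-vocabulary word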