Recurrent Neural Networks

accuracy: 0.9962 - val_loss: 0.7770 - val_accuracy: 0.7500
Epoch 10/10
29/29 [==============================] - 3s 99ms/step - loss: 0.0062 - accuracy: 0.9988 - val_loss: 0.8344 - val_accuracy: 0.7450

Figure 5: Accuracy and loss plots from TensorBoard for sentiment analysis network training

Our checkpoint callback has saved the best model based on the lowest value of validation loss, and we can now reload this for evaluation against our held-out test set:

best_model = SentimentAnalysisModel(vocab_size+1, max_seqlen)
best_model.build(input_shape=(batch_size, max_seqlen))
best_model.load_weights(best_model_file)
best_model.compile(
    loss="binary_crossentropy",
    optimizer="adam",
    metrics=["accuracy"])

The easiest high-level way to evaluate a model against a dataset is to use the model.evaluate() call:

test_loss, test_acc = best_model.evaluate(test_dataset)
print("test loss: {:.3f}, test accuracy: {:.3f}".format(
    test_loss, test_acc))

This gives us the following output:

test loss: 0.487, test accuracy: 0.782
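For reference, a checkpoint callback like the one described above can be built with tf.keras.callbacks.ModelCheckpoint. The following is a minimal sketch, assuming best_model_file is the same path that was passed during training:

import tensorflow as tf

# minimal sketch of a best-model checkpoint callback;
# best_model_file is assumed to be the path used at training time
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath=best_model_file,
    save_weights_only=True,   # we reload with load_weights() above
    save_best_only=True,      # keep only the best checkpoint seen so far
    monitor="val_loss",       # "best" means lowest validation loss
    mode="min")

Passing this callback to model.fit() is what makes the load_weights() call above pick up the weights from the best epoch rather than the last one.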
We can also use model.predict() to retrieve our predictions, compare them individually with the labels, and use external tools (from scikit-learn, for example) to compute metrics such as accuracy and the confusion matrix:
from sklearn.metrics import accuracy_score, confusion_matrix

labels, predictions = [], []
idx2word[0] = "PAD"
is_first_batch = True
for test_batch in test_dataset:
    inputs_b, labels_b = test_batch
    pred_batch = best_model.predict(inputs_b)
    predictions.extend([(1 if p > 0.5 else 0) for p in pred_batch])
    # cast label tensors to ints so the {:d} formatting below works
    labels.extend([int(l) for l in labels_b])
    if is_first_batch:
        # print first batch of label, prediction, and sentence
        for rid in range(inputs_b.shape[0]):
            words = [idx2word[idx] for idx in inputs_b[rid].numpy()]
            words = [w for w in words if w != "PAD"]
            sentence = " ".join(words)
            print("{:d}\t{:d}\t{:s}".format(
                labels[rid], predictions[rid], sentence))
        is_first_batch = False

print("accuracy score: {:.3f}".format(
    accuracy_score(labels, predictions)))
print("confusion matrix")
print(confusion_matrix(labels, predictions))
For the first batch of 64 sentences in our test dataset, we reconstruct each sentence and display its label (first column) alongside the prediction from the model (second column). Here we show the first few sentences from that batch. As you can see, the model gets most of them right:
LBL PRED SENT
1 1 one of my favorite purchases ever
1 1 works great
1 1 our waiter was very attentive friendly and informative
0 0 defective crap
0 1 and it was way to expensive
0 0 don't waste your money
0 0 friend's pasta also bad he barely touched it
1 1 it's a sad movie but very good
0 0 we recently witnessed her poor quality of management towards other guests as well
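Because the labels and predictions lists are already in scikit-learn's expected format, it is easy to go beyond accuracy and the confusion matrix. As a minimal sketch, per-class precision, recall, and F1 can be obtained with classification_report (the target_names strings here are our own choice, not part of the original code):

from sklearn.metrics import classification_report

# per-class precision, recall, and F1 for the two sentiment classes
print(classification_report(
    labels, predictions, target_names=["negative", "positive"]))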