accuracy: 0.9962 - val_loss: 0.7770 - val_accuracy: 0.7500
Epoch 10/10
29/29 [==============================] - 3s 99ms/step - loss: 0.0062 - accuracy: 0.9988 - val_loss: 0.8344 - val_accuracy: 0.7450

Figure 5: Accuracy and loss plots from TensorBoard for sentiment analysis network training

Our checkpoint callback has saved the best model based on the lowest value of validation loss, and we can now reload this for evaluation against our held-out test set:

best_model = SentimentAnalysisModel(vocab_size+1, max_seqlen)
best_model.build(input_shape=(batch_size, max_seqlen))
best_model.load_weights(best_model_file)
best_model.compile(
    loss="binary_crossentropy",
    optimizer="adam",
    metrics=["accuracy"]
)

The easiest high-level way to evaluate a model against a dataset is to use the model.evaluate() call:

test_loss, test_acc = best_model.evaluate(test_dataset)
print("test loss: {:.3f}, test accuracy: {:.3f}".format(
    test_loss, test_acc))

This gives us the following output:

test loss: 0.487, test accuracy: 0.782
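For completeness, here is a minimal sketch of how the checkpoint and TensorBoard callbacks referenced above are typically wired into training. The names model, num_epochs, logdir, train_dataset, and val_dataset are assumptions standing in for objects created during the training step; only best_model_file is taken from the code above:

import tensorflow as tf

# save only the weights of the best model seen so far, judged by validation loss
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath=best_model_file,   # assumed path, same file used by load_weights() above
    save_weights_only=True,
    save_best_only=True,
    monitor="val_loss")

# write loss and accuracy curves that TensorBoard can plot (as in Figure 5)
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=logdir)

model.fit(
    train_dataset,
    epochs=num_epochs,
    validation_data=val_dataset,
    callbacks=[checkpoint, tensorboard])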



We can also use model.predict() to retrieve our predictions and compare them individually to the labels, and use external tools (from scikit-learn, for example) to compute our results:

from sklearn.metrics import accuracy_score, confusion_matrix

labels, predictions = [], []
idx2word[0] = "PAD"
is_first_batch = True
for test_batch in test_dataset:
    inputs_b, labels_b = test_batch
    pred_batch = best_model.predict(inputs_b)
    predictions.extend([(1 if p > 0.5 else 0) for p in pred_batch])
    # convert label tensors to plain integers
    labels.extend([l.numpy() for l in labels_b])
    if is_first_batch:
        # print first batch of label, prediction, and sentence
        for rid in range(inputs_b.shape[0]):
            words = [idx2word[idx] for idx in inputs_b[rid].numpy()]
            words = [w for w in words if w != "PAD"]
            sentence = " ".join(words)
            print("{:d}\t{:d}\t{:s}".format(
                labels[rid], predictions[rid], sentence))
        is_first_batch = False

print("accuracy score: {:.3f}".format(accuracy_score(labels, predictions)))
print("confusion matrix")
print(confusion_matrix(labels, predictions))

For the first batch of 64 sentences in our test dataset, we reconstruct the sentence and display the label (first column) as well as the prediction from the model (second column). Here we show the first few sentences from this batch. As you can see, the model gets it right for most sentences in this list:

LBL  PRED  SENT
1    1     one of my favorite purchases ever
1    1     works great
1    1     our waiter was very attentive friendly and informative
0    0     defective crap
0    1     and it was way to expensive
0    0     don't waste your money
0    0     friend's pasta also bad he barely touched it
1    1     it's a sad movie but very good
0    0     we recently witnessed her poor quality of management towards other guests as well
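The same labels and predictions lists can feed other scikit-learn metrics as well. As a minimal sketch, a classification report adds per-class precision, recall, and F1; the target_names strings here are illustrative, mapping label 0 to "negative" and 1 to "positive":

from sklearn.metrics import classification_report

# per-class precision, recall, and F1 from the collected labels and predictions
print(classification_report(labels, predictions,
      target_names=["negative", "positive"]))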

