

Regression

The following is the graph of the model:

Figure 3: TensorBoard graph of the generated model

From TensorBoard we can also visualize the change in accuracy and average loss as the linear classifier learned, in steps of ten:

Figure 4: Accuracy and average loss, visualized

Chapter 3

4. Use the feature_column module of TensorFlow to define numeric features of size 28×28:

   feature_columns = [tf.feature_column.numeric_column("x",
                                                       shape=[28, 28])]

5. Create the logistic regression estimator. We use a simple LinearClassifier. We encourage you to experiment with DNNClassifier as well:

   classifier = tf.estimator.LinearClassifier(
       feature_columns=feature_columns,
       n_classes=10,
       model_dir="mnist_model/")
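As a sketch of that experiment, the LinearClassifier can be swapped for a DNNClassifier with the same feature columns. The hidden_units sizes and model_dir below are illustrative assumptions, not values from this recipe:

```python
import tensorflow as tf

# Hypothetical variant: same feature columns, but a deep classifier.
# hidden_units and model_dir are illustrative choices, not from the recipe.
feature_columns = [tf.feature_column.numeric_column("x", shape=[28, 28])]
dnn_classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[256, 64],       # two fully connected layers (assumed sizes)
    n_classes=10,
    model_dir="mnist_dnn_model/")
```

The resulting estimator exposes the same train/evaluate interface, so the input functions built in the following steps work with it unchanged.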

6. Let us also build an input function to feed the estimator:

   train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
       x={"x": train_data},
       y=train_labels,
       batch_size=100,
       num_epochs=None,
       shuffle=True)
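numpy_input_fn lives under tf.compat.v1 because it is a TF 1.x helper; an equivalent pipeline can be sketched with tf.data, the TF 2.x-native mechanism. The placeholder arrays below stand in for the MNIST arrays loaded earlier and are not the real data:

```python
import numpy as np
import tensorflow as tf

# Placeholder arrays standing in for the real train_data / train_labels.
train_data = np.random.rand(1000, 28, 28).astype(np.float32)
train_labels = np.random.randint(0, 10, size=1000).astype(np.int32)

def train_input_fn():
    # Mirrors shuffle=True, num_epochs=None (repeat forever), batch_size=100.
    ds = tf.data.Dataset.from_tensor_slices(({"x": train_data}, train_labels))
    return ds.shuffle(1000).repeat().batch(100)
```

An estimator accepts such a function directly, e.g. classifier.train(input_fn=train_input_fn, steps=10).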

7. Let's now train the classifier:

   classifier.train(input_fn=train_input_fn, steps=10)

8. Next, we create the input function for the validation data:

   val_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
       x={"x": eval_data},
       y=eval_labels,
       num_epochs=1,
       shuffle=False)

9. Let us evaluate the trained LinearClassifier on the validation data:

   eval_results = classifier.evaluate(input_fn=val_input_fn)
   print(eval_results)
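evaluate() returns a plain Python dict of metrics. As a sketch of reading it, the dict below imitates the shape of the result for a classifier head; the numbers are made up for illustration, not taken from a real run:

```python
# Illustrative metrics dict of the shape evaluate() returns for a
# classifier head; the values here are invented for the example.
eval_results = {"accuracy": 0.894, "average_loss": 0.38,
                "loss": 0.38, "global_step": 130}
print("validation accuracy: {:.1%}".format(eval_results["accuracy"]))
```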

10. We get an accuracy of 89.4% after 130 steps. Not bad, right? Note that because we pass steps=10, the model trains for exactly that many steps and then logs its metrics. If we call train again, it resumes from the state checkpointed at step 10, and the global step keeps growing by the number of steps specified on each subsequent call.
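The bookkeeping in point 10 can be sketched without TensorFlow at all; the class below is a toy stand-in for how an estimator's checkpointed global step accumulates across train() calls:

```python
class StepCounterSketch:
    """Toy stand-in for an Estimator's checkpointed global step."""

    def __init__(self):
        self.global_step = 0  # in a real Estimator, restored from model_dir

    def train(self, steps):
        # Training resumes from the saved state and advances by `steps`.
        self.global_step += steps
        return self.global_step

est = StepCounterSketch()
est.train(steps=10)   # first call ends at global step 10
est.train(steps=10)   # second call resumes at 10 and ends at 20
```

This is why re-running the training cell never restarts from scratch: the counter (and the model weights it indexes) live in model_dir, not in the Python session.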

