Regression

The following is the graph of the preceding model:

Figure 3: TensorBoard graph of the generated model

From TensorBoard we can also visualize the change in accuracy and average loss as the linear classifier learned in steps of ten:

Figure 4: Accuracy and average loss, visualized
Chapter 3
4. Use the feature_column module of TensorFlow to define numeric features
of size 28×28:
feature_columns = [tf.feature_column.numeric_column("x",
shape=[28, 28])]
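The shape=[28, 28] column means each example feeds all 784 pixel values to the model as one dense feature vector, which in turn fixes the parameter count of a 10-class linear model. A small NumPy sketch (the zero image here is a placeholder, not real MNIST data):

```python
import numpy as np

# Placeholder standing in for one 28x28 MNIST digit.
image = np.zeros((28, 28), dtype=np.float32)
flat = image.reshape(-1)   # the linear model sees 784 pixel features
print(flat.shape[0])       # 784

# A 10-class linear classifier learns one weight per (feature, class)
# pair plus one bias per class:
n_params = 28 * 28 * 10 + 10
print(n_params)            # 7850
```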
5. Create the logistic regression estimator. We use a simple LinearClassifier.
We encourage you to experiment with DNNClassifier as well:
classifier = tf.estimator.LinearClassifier(
feature_columns=feature_columns,
n_classes=10,
model_dir="mnist_model/"
)
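Under the hood, LinearClassifier with n_classes=10 is multinomial logistic regression: logits from one affine map, class probabilities via softmax, prediction via argmax. A minimal NumPy sketch of that computation (the random weights are purely illustrative, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((5, 784)).astype(np.float32)              # 5 flattened images
W = rng.normal(scale=0.01, size=(784, 10)).astype(np.float32)
b = np.zeros(10, dtype=np.float32)

logits = x @ W + b
shifted = logits - logits.max(axis=1, keepdims=True)      # numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
preds = probs.argmax(axis=1)

print(probs.shape)   # (5, 10): one probability per class per example
print(preds.shape)   # (5,): one predicted class per example
```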
6. Let us also build an input_function to feed the estimator:
train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=100,
num_epochs=None,
shuffle=True)
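The arguments map onto simple iteration semantics: shuffle=True reorders the examples each epoch, batch_size=100 yields fixed-size batches, and num_epochs=None cycles over the data indefinitely so training can run for any number of steps. A pure-Python sketch of that behavior at the index level (it drops the final partial batch; the real input function also pairs features with labels):

```python
import random

def batches(n_examples, batch_size, seed=0):
    """Yield shuffled index batches forever, dropping partial batches."""
    rng = random.Random(seed)
    while True:  # num_epochs=None: loop over the data indefinitely
        order = list(range(n_examples))
        rng.shuffle(order)
        for i in range(0, n_examples - batch_size + 1, batch_size):
            yield order[i:i + batch_size]

gen = batches(n_examples=550, batch_size=100)
first = next(gen)
print(len(first))  # 100 indices per batch
```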
7. Let's now train the classifier:
classifier.train(input_fn=train_input_fn, steps=10)
8. Next, we create the input function for validation data:
val_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
9. Let us evaluate the trained Linear Classifier on the validation data:
eval_results = classifier.evaluate(input_fn=val_input_fn)
print(eval_results)
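eval_results is a dictionary of metrics; for a classifier the headline entry is accuracy, which is simply the fraction of examples whose predicted class matches the label. A toy NumPy sketch of that computation (the labels and predictions here are made up for illustration):

```python
import numpy as np

labels = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
preds  = np.array([3, 1, 4, 7, 5, 9, 2, 6, 8, 3])

# Accuracy = fraction of positions where prediction equals label.
accuracy = float((labels == preds).mean())
print(accuracy)  # 0.8: 8 of the 10 predictions match
```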
10. We get an accuracy of 89.4% after 130 training steps. Not bad, right? Note
that because we passed steps=10, each call to train runs for exactly 10
steps and logs its metrics at that point. If we run train again, it resumes
from the checkpoint saved in model_dir at the 10th step rather than starting
over, so the global step keeps growing by the number of steps specified on
each call.
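The step counting above is simple arithmetic; as a sketch (the thirteen calls are an assumption consistent with reaching step 130 in increments of 10, equivalent to one call with steps=130 on a fresh model directory):

```python
# Each classifier.train(..., steps=10) call resumes from the checkpoint
# in model_dir and advances the global step by 10.
steps_per_call = 10
calls = 13
print(steps_per_call * calls)  # 130: the global step at which 89.4% was logged
```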