However, accuracy reached a new peak: 99.991% on training, 99.91% on validation, and 99.15% on test!

Figure 6: LeNet accuracy

Let's see the execution of a full run for 20 epochs:
Chapter 4
X_train, X_test = X_train / 255.0, X_test / 255.0
# cast
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, NB_CLASSES)
y_test = tf.keras.utils.to_categorical(y_test, NB_CLASSES)
# initialize the optimizer and model
model = build(input_shape=INPUT_SHAPE, classes=NB_CLASSES)
model.compile(loss="categorical_crossentropy", optimizer=OPTIMIZER,
              metrics=["accuracy"])
model.summary()
# use TensorBoard, princess Aurora!
callbacks = [
# Write TensorBoard logs to './logs' directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
# fit
history = model.fit(X_train, y_train,
                    batch_size=BATCH_SIZE, epochs=EPOCHS,
                    verbose=VERBOSE, validation_split=VALIDATION_SPLIT,
                    callbacks=callbacks)
score = model.evaluate(X_test, y_test, verbose=VERBOSE)
print("\nTest score:", score[0])
print('Test accuracy:', score[1])
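The preprocessing steps at the top of the listing (scaling pixels to [0, 1] and one-hot encoding the labels) can be sketched without TensorFlow using plain NumPy; `to_categorical` is equivalent to an identity-matrix lookup. The toy arrays and the value NB_CLASSES = 10 below are stand-ins assumed from the MNIST setup earlier in the chapter:

```python
import numpy as np

NB_CLASSES = 10  # assumed, as for MNIST earlier in the chapter

def one_hot(labels, num_classes):
    # Equivalent of tf.keras.utils.to_categorical: row i of the
    # identity matrix is the one-hot vector for class i.
    return np.eye(num_classes, dtype='float32')[labels]

# Toy stand-ins for the MNIST arrays used above
X = np.array([[0, 128], [255, 64]], dtype='uint8')
y = np.array([3, 7])

X_scaled = X.astype('float32') / 255.0   # pixel values now in [0, 1]
Y = one_hot(y, NB_CLASSES)               # shape (2, 10)
```

Casting to float32 before dividing matters: integer pixel values would otherwise be promoted implicitly, and float32 is what the GPU kernels expect.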
Now let's run the code. As you can see in Figure 6, training time increases significantly: each epoch of our DNN now takes ~28 seconds, against the ~1-2 seconds per epoch for the net defined in Chapter 1, Neural Network Foundations with TensorFlow 2.0.
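The accuracy curves shown in Figure 6 come from the history object returned by model.fit: its history attribute maps each metric name to a per-epoch list (with metrics=["accuracy"], tf.keras records 'accuracy' and 'val_accuracy'). A minimal sketch of reading it, using a mock dictionary in place of a real training run:

```python
# Mock stand-in for history.history after a hypothetical 3-epoch run.
history_dict = {
    'accuracy':     [0.950, 0.980, 0.999],
    'val_accuracy': [0.960, 0.985, 0.991],
}

# Find the best validation accuracy and the (1-based) epoch it occurred in.
best_val = max(history_dict['val_accuracy'])
best_epoch = history_dict['val_accuracy'].index(best_val) + 1
print(f"best val_accuracy {best_val:.3f} at epoch {best_epoch}")
```

The same dictionary is what you would plot to reproduce Figure 6, or inspect interactively in TensorBoard via the callback registered above.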