Let's see an example of a custom layer that simply multiplies an input by a matrix
named kernel (the necessary import lines, skipped in the book's text, are included
here; the full code is of course available on GitHub):
import tensorflow as tf
from tensorflow.keras import layers

class MyLayer(layers.Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)

    def call(self, inputs):
        # Do the multiplication and return
        return tf.matmul(inputs, self.kernel)
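Note that the kernel is not created in __init__() but in build(), which tf.keras
invokes automatically the first time the layer sees an input, so the weight shape
can depend on the input shape. A minimal check (our addition, not part of the
original example) makes this visible:

layer = MyLayer(20)
print(layer.weights)           # [] - build() has not run yet
_ = layer(tf.zeros((2, 5)))    # the first call triggers build()
print(layer.kernel.shape)      # (5, 20)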
Once the MyLayer() custom brick is defined, it can be composed just like any other
brick, as in the following example, where a Sequential model is defined by stacking
MyLayer with a softmax activation function:
model = tf.keras.Sequential([
    MyLayer(20),
    layers.Activation('softmax')])
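As a quick smoke test (our addition; the data here is random and purely
illustrative), the stacked model can be compiled and fit like any model built
from standard layers, confirming that the custom kernel is trainable end to end:

import numpy as np

data = np.random.random((64, 10)).astype('float32')   # 64 samples, 10 features
labels = np.random.randint(0, 20, size=(64,))         # 20 classes, as in MyLayer(20)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy')
model.fit(data, labels, epochs=1, verbose=0)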
So, in short, you can use Model subclassing if you are in the business of building
bricks.
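The same pattern scales up from layers to whole models: subclass tf.keras.Model
instead of layers.Layer. As a minimal sketch (our addition, with hypothetical
names), the brick above could be wrapped as follows:

class MyModel(tf.keras.Model):
    def __init__(self, num_classes=20, **kwargs):
        super(MyModel, self).__init__(**kwargs)
        # Compose the custom brick with a standard activation layer
        self.my_layer = MyLayer(num_classes)
        self.softmax = layers.Activation('softmax')

    def call(self, inputs):
        return self.softmax(self.my_layer(inputs))

An instance of MyModel can then be compiled and trained exactly like the
Sequential version above.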
In this section we have seen that tf.keras offers a higher-level API with three
different programming models: the Sequential API, the Functional API, and Model
subclassing. Now let's move our attention to callbacks, a different feature that
is useful during training with tf.keras.
Callbacks
Callbacks are objects passed to a model to extend or modify behaviors during
training. There are a few useful callbacks that are commonly used in tf.keras:
• tf.keras.callbacks.ModelCheckpoint: This feature is used to save
checkpoints of your model at regular intervals, so that training can be
recovered in case of problems.
• tf.keras.callbacks.LearningRateScheduler: This feature is used
to dynamically change the learning rate during optimization.
• tf.keras.callbacks.EarlyStopping: This feature is used to interrupt
training when validation performance has stopped improving after a while.
• tf.keras.callbacks.TensorBoard: This feature is used to monitor the
model's behavior using TensorBoard.
For example, we have already used TensorBoard, as shown here:

callbacks = [
    # Write TensorBoard logs to the './logs' directory
    tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=256, epochs=100,
          callbacks=callbacks,
          validation_data=(val_data, val_labels))

Saving a model and weights
After training a model, it can be useful to save the weights in a persistent way.
This is easily achieved with the following code fragment, which saves to
TensorFlow's internal format:

# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')

If you want to save in Keras's format, which is portable across multiple backends,
then use:

# Save weights to an HDF5 file
model.save_weights('my_model.h5', save_format='h5')

Weights are easily loaded with:

# Restore the model's state
model.load_weights(file_path)

In addition to weights, a model's architecture can be serialized in JSON with:

json_string = model.to_json()  # save
model = tf.keras.models.model_from_json(json_string)  # restore

If you prefer, a model can be serialized in YAML with:

yaml_string = model.to_yaml()  # save
model = tf.keras.models.model_from_yaml(yaml_string)  # restore
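Putting these pieces together, here is a hedged sketch (our addition; the file
name is hypothetical) of a complete round trip for a model built from standard
layers: serialize the architecture to JSON, save the weights to HDF5, then
rebuild and reload. Note that a custom layer such as MyLayer would additionally
need a get_config() method and the custom_objects argument of model_from_json()
to be restored this way:

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(10,)),
    layers.Dense(20, activation='softmax')])

# Save the architecture and the weights separately
json_string = model.to_json()
model.save_weights('my_model.h5', save_format='h5')

# Rebuild the architecture, then restore the weights into it
restored = tf.keras.models.model_from_json(json_string)
restored.load_weights('my_model.h5')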