model.add(keras.layers.Dense(NB_CLASSES,
          input_shape=(RESHAPED,),
          name='dense_layer',
          activation='softmax'))

Once we define the model, we have to compile it so that it can be executed by TensorFlow 2.0. There are a few choices to be made during compilation. First, we need to select an optimizer, which is the specific algorithm used to update the weights while we train our model. Second, we need to select an objective function, which is used by the optimizer to navigate the space of weights (frequently, objective functions are called either loss functions or cost functions, and the process of optimization is defined as a process of loss minimization). Third, we need to evaluate the trained model.

A complete list of optimizers can be found at https://www.tensorflow.org/api_docs/python/tf/keras/optimizers.

Some common choices for objective functions are:

• MSE, which defines the mean squared error between the predictions and the true values. Mathematically, if $d$ is a vector of predictions and $y$ is the vector of $n$ observed values, then $\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(d_i - y_i)^2$. Note that this objective function is the average of all the mistakes made in each prediction. If a prediction is far off from the true value, then this distance is made more evident by the squaring operation. In addition, squaring accumulates the error regardless of whether a given difference is positive or negative.

• binary_crossentropy, which defines the binary logarithmic loss. Suppose that our model predicts $p$ while the target is $c$; then the binary cross-entropy is defined as $L(p, c) = -c \ln(p) - (1 - c) \ln(1 - p)$. Note that this objective function is suitable for binary label prediction.

• categorical_crossentropy, which defines the multiclass logarithmic loss. Categorical cross-entropy compares the distribution of the predictions with the true distribution, where the probability of the true class is set to 1 and the probabilities of the other classes are set to 0. If the true class is $c$ and the prediction is $p$, then the categorical cross-entropy is defined as $L(c, p) = -\sum_i c_i \ln(p_i)$.
One way to think about multiclass logarithmic loss is to consider the true class represented as a one-hot encoded vector: the closer the model's outputs are to that vector, the lower the loss. Note that this objective function is suitable for multiclass label predictions. It is also the default choice in association with softmax activation.
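To make these formulas concrete, here is a small sketch (my own illustration, not from the original text; the values of d, y, p, and c are hypothetical) that evaluates each loss with tf.keras.losses and shows that the categorical cross-entropy shrinks as the prediction approaches the one-hot target:

import tensorflow as tf

# MSE: average squared difference between predictions d and targets y.
d = tf.constant([2.0, 3.0, 5.0])             # predictions
y = tf.constant([1.0, 3.0, 4.0])             # observed values
print(tf.keras.losses.MeanSquaredError()(y, d).numpy())    # (1+0+1)/3 ~ 0.67

# Binary cross-entropy: -c*ln(p) - (1-c)*ln(1-p), averaged over samples.
c = tf.constant([1.0, 0.0])                  # binary targets
p = tf.constant([0.9, 0.2])                  # predicted probabilities
print(tf.keras.losses.BinaryCrossentropy()(c, p).numpy())  # ~ 0.16

# Categorical cross-entropy: -sum_i c_i*ln(p_i) against a one-hot target.
one_hot = tf.constant([[0.0, 1.0, 0.0]])     # true class is class 1
far = tf.constant([[0.4, 0.4, 0.2]])
near = tf.constant([[0.05, 0.9, 0.05]])
cce = tf.keras.losses.CategoricalCrossentropy()
print(cce(one_hot, far).numpy())             # -ln(0.4) ~ 0.92
print(cce(one_hot, near).numpy())            # -ln(0.9) ~ 0.11: closer, lower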
A complete list of loss functions can be found at https://www.tensorflow.org/api_docs/python/tf/keras/losses.
Some common choices for metrics (demonstrated in the sketch following the link below) are:
• Accuracy, which defines the proportion of correct predictions with respect to the targets
• Precision, which defines how many selected items are relevant in a multi-label classification
• Recall, which defines how many relevant items are selected in a multi-label classification
A complete list of metrics can be found at https://www.tensorflow.org/api_docs/python/tf/keras/metrics.
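As a hedged illustration (the toy labels below are my own, not from the book), the stateful metric classes in tf.keras.metrics compute all three quantities; update_state() accumulates batches and result() reports the running value:

import tensorflow as tf

y_true = [0, 1, 1, 1]    # targets
y_pred = [0, 1, 0, 1]    # predictions (one relevant item missed)

for metric in (tf.keras.metrics.Accuracy(),
               tf.keras.metrics.Precision(),
               tf.keras.metrics.Recall()):
    metric.update_state(y_true, y_pred)   # accumulate a batch
    print(metric.name, metric.result().numpy())
# accuracy 0.75: 3 of 4 predictions are correct
# precision 1.0: every selected (predicted-positive) item is relevant
# recall ~0.67: 2 of 3 relevant items were selected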
Metrics are similar to objective functions, with the only difference being that they are not used for training a model, only for evaluating it. However, it is important to understand the difference between metrics and objective functions. As discussed, the loss function is used to optimize your network; it is the function minimized by the selected optimizer. A metric, instead, is used to judge the performance of your network; it is only for evaluation and should be kept separate from the optimization process. On some occasions, it would be ideal to directly optimize for a specific metric. However, some metrics are not differentiable with respect to their inputs, which precludes them from being used directly.
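To see why, the short sketch below (my own illustration; the tensors are hypothetical) asks tf.GradientTape for gradients: the cross-entropy loss yields a usable gradient, while accuracy, which relies on argmax, yields none:

import tensorflow as tf

y_true = tf.constant([[0.0, 1.0]])
probs = tf.Variable([[0.3, 0.7]])   # predicted class probabilities

with tf.GradientTape(persistent=True) as tape:
    loss = tf.keras.losses.categorical_crossentropy(y_true, probs)
    # Accuracy compares argmax results: a step function of the inputs.
    acc = tf.cast(tf.equal(tf.argmax(probs, axis=1),
                           tf.argmax(y_true, axis=1)), tf.float32)

print(tape.gradient(loss, probs))   # a well-defined gradient tensor
print(tape.gradient(acc, probs))    # None: accuracy is not differentiable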
When compiling a model in TensorFlow 2.0, it is possible to select the optimizer, the loss function, and the metrics to be used together with a given model:
# Compiling the model.
model.compile(optimizer='SGD',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
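The string shortcuts accepted by compile() also have object equivalents; the variant below (a sketch, with a learning rate chosen only for illustration) passes configured instances instead, which is handy when hyperparameters need tuning:

# Equivalent compilation with explicit objects instead of string names.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=[tf.keras.metrics.CategoricalAccuracy()])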