Chapter 4

Figure 12: An example of CIFAR-10 images

The goal is to recognize previously unseen images and assign them to one of the 10 classes. Let us define a suitable deep net.

First of all, we import a number of useful modules, define a few constants, and load the dataset (the full code, including the load operations, is available online):

import tensorflow as tf
from tensorflow.keras import datasets, layers, models, optimizers

# CIFAR-10 is a set of 60K images, 32x32 pixels, on 3 channels
IMG_CHANNELS = 3
IMG_ROWS = 32
IMG_COLS = 32

# constants
BATCH_SIZE = 128
EPOCHS = 20
CLASSES = 10
VERBOSE = 1
VALIDATION_SPLIT = 0.2
OPTIM = tf.keras.optimizers.RMSprop()
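The load operations themselves are elided here (the full code is available online). As a rough, self-contained sketch of the usual preprocessing steps, scaling pixel values to [0, 1] and one-hot encoding the labels to match categorical_crossentropy, the example below uses a small random stand-in batch rather than the real dataset; in practice the commented-out datasets.cifar10.load_data() call would supply the real data:

```python
import numpy as np
import tensorflow as tf

# In practice:
# (X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()
# Here, a small random stand-in batch with the same shapes and dtypes:
X_train = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(8, 1))

# Scale pixel values from [0, 255] to [0, 1]
X_train = X_train.astype('float32') / 255.0

# One-hot encode the integer labels for categorical_crossentropy
y_train = tf.keras.utils.to_categorical(y_train, 10)

print(X_train.shape, y_train.shape)  # (8, 32, 32, 3) (8, 10)
```

Note that to_categorical drops a trailing dimension of size 1, so labels of shape (N, 1) become one-hot matrices of shape (N, 10).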
Our net will learn 32 convolutional filters, each of size 3×3. The output dimension is the same as the input shape, so it will be 32×32, and the activation function used is ReLU, which is a simple way of introducing non-linearity. After that we have a max pooling operation with pool size 2×2 and a dropout of 25%:

# define the convnet
def build(input_shape, classes):
    model = models.Sequential()
    model.add(layers.Convolution2D(32, (3, 3), activation='relu',
                                   input_shape=input_shape))
    model.add(layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(layers.Dropout(0.25))

The next stage in the deep pipeline is a dense network with 512 units and ReLU activation, followed by a dropout at 50% and by a softmax layer with 10 classes as output, one for each category:

    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(classes, activation='softmax'))
    return model

After defining the network, we can train the model. In this case, we split the data and compute a validation set in addition to the training and test sets. The training set is used to build our models, the validation set is used to select the best performing approach, and the test set is used to check the performance of our best models on fresh, unseen data:

# use TensorBoard, princess Aurora!
callbacks = [
    # Write TensorBoard logs to the './logs' directory
    tf.keras.callbacks.TensorBoard(log_dir='./logs')
]

# build and train
model = build((IMG_ROWS, IMG_COLS, IMG_CHANNELS), CLASSES)
model.compile(loss='categorical_crossentropy', optimizer=OPTIM,
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=BATCH_SIZE,
          epochs=EPOCHS, validation_split=VALIDATION_SPLIT,
          verbose=VERBOSE, callbacks=callbacks)
score = model.evaluate(X_test, y_test,
                       batch_size=BATCH_SIZE, verbose=VERBOSE)
print("\nTest score:", score[0])
print('Test accuracy:', score[1])
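Once trained, the model classifies previously unseen images via its softmax output, one probability per class. The sketch below rebuilds the same architecture (untrained, purely for illustration) and runs it on a random stand-in image; in practice you would call predict on the trained model with a real, normalized 32x32x3 image:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Same architecture as in the text (untrained here, for illustration only)
def build(input_shape, classes):
    model = models.Sequential()
    model.add(layers.Convolution2D(32, (3, 3), activation='relu',
                                   input_shape=input_shape))
    model.add(layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(classes, activation='softmax'))
    return model

model = build((32, 32, 3), 10)

# A single normalized image; a random stand-in for a real CIFAR-10 sample
image = np.random.rand(1, 32, 32, 3).astype('float32')

probs = model.predict(image, verbose=0)   # shape (1, 10); each row sums to 1
predicted_class = int(np.argmax(probs, axis=1)[0])
```

The training curves recorded by the TensorBoard callback can be inspected separately by running tensorboard --logdir ./logs.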