Advanced Deep Learning with Keras
Chapter 2

The complete code is available on GitHub: https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras.

It's often difficult to exactly duplicate the implementation of the original paper, especially in the optimizer used and the data augmentation, so there are slight differences in performance between the Keras ResNet implementation in this book and the model in the original paper.

ResNet v2

After the release of the second paper on ResNet [4], the original model presented in the previous section became known as ResNet v1. The improved ResNet is commonly called ResNet v2. The improvement is mainly in the arrangement of layers in the residual block, as shown in the following figure.

The prominent changes in ResNet v2 are:

• The use of a stack of 1 × 1 - 3 × 3 - 1 × 1 BN-ReLU-Conv2D layers
• Batch normalization and ReLU activation come before the 2D convolution

Figure 2.3.1: A comparison of residual blocks between ResNet v1 and ResNet v2
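The two changes above can be sketched as a self-contained pre-activation bottleneck block. This is a minimal illustration rather than the book's `resnet_layer`-based code: it is written against `tf.keras`, and the `bottleneck_v2` name, the filter counts, and the projection shortcut are assumptions of this sketch.

```python
from tensorflow.keras import Input, Model, layers

def bottleneck_v2(x, filters_in, filters_out, strides=1):
    """Pre-activation bottleneck: BN-ReLU comes before every Conv2D."""
    # 1 x 1 reduction
    y = layers.BatchNormalization()(x)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters_in, 1, strides=strides, padding='same')(y)
    # 3 x 3 convolution
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters_in, 3, padding='same')(y)
    # 1 x 1 expansion
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters_out, 1, padding='same')(y)
    # project the shortcut when the shape changes
    if strides != 1 or x.shape[-1] != filters_out:
        x = layers.Conv2D(filters_out, 1, strides=strides, padding='same')(x)
    return layers.add([x, y])

inputs = Input(shape=(32, 32, 16))
outputs = bottleneck_v2(inputs, filters_in=16, filters_out=64)
model = Model(inputs, outputs)
```

With a (32, 32, 16) input and `filters_out=64`, the block produces a (32, 32, 64) output, matching the four-fold filter expansion used in the first stage of `resnet_v2` below.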
ResNet v2 is also implemented in the same code as resnet-cifar10-2.2.1.py:

def resnet_v2(input_shape, depth, num_classes=10):
    if (depth - 2) % 9 != 0:
        raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])')
    # Start model definition.
    num_filters_in = 16
    num_res_blocks = int((depth - 2) / 9)

    inputs = Input(shape=input_shape)
    # v2 performs Conv2D with BN-ReLU on input
    # before splitting into 2 paths
    x = resnet_layer(inputs=inputs,
                     num_filters=num_filters_in,
                     conv_first=True)

    # Instantiate the stack of residual units
    for stage in range(3):
        for res_block in range(num_res_blocks):
            activation = 'relu'
            batch_normalization = True
            strides = 1
            if stage == 0:
                num_filters_out = num_filters_in * 4
                if res_block == 0:  # first layer and first stage
                    activation = None
                    batch_normalization = False
            else:
                num_filters_out = num_filters_in * 2
                if res_block == 0:  # 1st layer but not 1st stage
                    strides = 2  # downsample

            # bottleneck residual unit
            y = resnet_layer(inputs=x,
                             num_filters=num_filters_in,
                             kernel_size=1,
                             strides=strides,
                             activation=activation,
                             batch_normalization=batch_normalization,
                             conv_first=False)
            y = resnet_layer(inputs=y,
                             num_filters=num_filters_in,
                             conv_first=False)
            y = resnet_layer(inputs=y,
                             num_filters=num_filters_out,
                             kernel_size=1,
                             conv_first=False)
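The function above builds on the `resnet_layer` helper introduced with ResNet v1 earlier in the chapter. As a reminder of what the `conv_first` flag controls, here is a sketch consistent with the calls above; treat the default arguments and the `he_normal`/`l2` settings as assumptions of this sketch rather than a verbatim copy:

```python
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation
from tensorflow.keras.regularizers import l2

def resnet_layer(inputs, num_filters=16, kernel_size=3, strides=1,
                 activation='relu', batch_normalization=True,
                 conv_first=True):
    """Conv-BN-ReLU (v1 ordering) or BN-ReLU-Conv (v2 ordering) block."""
    conv = Conv2D(num_filters,
                  kernel_size=kernel_size,
                  strides=strides,
                  padding='same',
                  kernel_initializer='he_normal',
                  kernel_regularizer=l2(1e-4))
    x = inputs
    if conv_first:
        # ResNet v1 ordering: Conv2D -> BN -> ReLU
        x = conv(x)
        if batch_normalization:
            x = BatchNormalization()(x)
        if activation is not None:
            x = Activation(activation)(x)
    else:
        # ResNet v2 ordering: BN -> ReLU -> Conv2D
        if batch_normalization:
            x = BatchNormalization()(x)
        if activation is not None:
            x = Activation(activation)(x)
        x = conv(x)
    return x
```

As a usage check of the depth formula: with depth=56, (56 - 2) % 9 == 0 passes and num_res_blocks = (56 - 2) // 9 = 6, giving the ResNet56 v2 configuration.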