Advanced Deep Learning with Keras


Chapter 2

The complete code is available on GitHub: https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras.

It's often difficult to exactly duplicate the implementation of the original paper, especially in the optimizer used and the data augmentation, so there are slight differences in performance between the Keras ResNet implementation in this book and the model in the original paper.

ResNet v2

After the release of the second paper on ResNet [4], the original model presented in the previous section has become known as ResNet v1. The improved ResNet is commonly called ResNet v2. The improvement is mainly in the arrangement of layers in the residual block, as shown in the following figure.

The prominent changes in ResNet v2 are:

• The use of a stack of 1 × 1 - 3 × 3 - 1 × 1 BN-ReLU-Conv2D layers
• Batch normalization and ReLU activation come before the 2D convolution

Figure 2.3.1: A comparison of residual blocks between ResNet v1 and ResNet v2
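The ordering change is what the conv_first flag of the book's resnet_layer helper selects. As a purely illustrative sketch (the strings stand in for Keras layers; this is not a runnable model), the two orderings can be written as:

```python
def block_layer_order(conv_first):
    """Illustrative only: return the layer ordering that resnet_layer's
    conv_first flag selects. ResNet v1 applies the convolution first
    (post-activation); ResNet v2 applies BN and ReLU before the
    convolution (pre-activation)."""
    if conv_first:
        return ["Conv2D", "BatchNormalization", "ReLU"]  # ResNet v1 style
    return ["BatchNormalization", "ReLU", "Conv2D"]      # ResNet v2 style
```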

ResNet v2 is also implemented in the same code as resnet-cifar10-2.2.1.py:

def resnet_v2(input_shape, depth, num_classes=10):
    if (depth - 2) % 9 != 0:
        raise ValueError('depth should be 9n+2 (eg 56 or 110 in [b])')
    # Start model definition.
    num_filters_in = 16
    num_res_blocks = int((depth - 2) / 9)

    inputs = Input(shape=input_shape)
    # v2 performs Conv2D with BN-ReLU on input
    # before splitting into 2 paths
    x = resnet_layer(inputs=inputs,
                     num_filters=num_filters_in,
                     conv_first=True)

    # Instantiate the stack of residual units
    for stage in range(3):
        for res_block in range(num_res_blocks):
            activation = 'relu'
            batch_normalization = True
            strides = 1
            if stage == 0:
                num_filters_out = num_filters_in * 4
                if res_block == 0:  # first layer and first stage
                    activation = None
                    batch_normalization = False
            else:
                num_filters_out = num_filters_in * 2
                if res_block == 0:  # 1st layer but not 1st stage
                    strides = 2     # downsample

            # bottleneck residual unit
            y = resnet_layer(inputs=x,
                             num_filters=num_filters_in,
                             kernel_size=1,
                             strides=strides,
                             activation=activation,
                             batch_normalization=batch_normalization,
                             conv_first=False)
            y = resnet_layer(inputs=y,
                             num_filters=num_filters_in,
                             conv_first=False)
            y = resnet_layer(inputs=y,

(The listing is cut off by a page break at this point in this excerpt; the complete resnet_v2 function is in resnet-cifar10-2.2.1.py in the GitHub repository linked above.)
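The depth check at the top of resnet_v2 follows from the block structure: each of the 3 stages stacks n bottleneck units of 3 convolutional layers each (9n layers in total), plus the initial Conv2D and the final classifier layer, giving depth = 9n + 2. As a small standalone sanity check of that formula (plain Python, no Keras required):

```python
def res_blocks_per_stage(depth):
    """Number of bottleneck residual units per stage for a ResNet v2
    of the given depth; depth must satisfy depth = 9n + 2."""
    if (depth - 2) % 9 != 0:
        raise ValueError('depth should be 9n+2 (eg 56 or 110)')
    return (depth - 2) // 9

print(res_blocks_per_stage(56))   # -> 6 residual units per stage
print(res_blocks_per_stage(110))  # -> 12 residual units per stage
```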

