
Advanced Deep Learning with Keras


Chapter 2

The following listing shows the partial ResNet implementation in Keras. The code has been contributed to the Keras GitHub repository. From Table 2.2.2 we can also see that by modifying the value of n, we can increase the depth of the network. For example, for n = 18 we already have ResNet110, a deep network with 110 layers. To build ResNet20, we use n = 3:

n = 3
# model version
# orig paper: version = 1 (ResNet v1)
# improved ResNet: version = 2 (ResNet v2)
version = 1

# computed depth from supplied model parameter n
if version == 1:
    depth = n * 6 + 2
elif version == 2:
    depth = n * 9 + 2

if version == 2:
    model = resnet_v2(input_shape=input_shape, depth=depth)
else:
    model = resnet_v1(input_shape=input_shape, depth=depth)

The resnet_v1() method is a model builder for ResNet. It uses a utility function, resnet_layer(), to help build the stack of Conv2D-BN-ReLU layers.
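The Conv2D-BN-ReLU stack can be sketched as follows. This is a minimal version of a resnet_layer()-style utility; the exact signature and defaults are assumptions, not the book's verbatim code:

```python
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation
from tensorflow.keras.regularizers import l2

def resnet_layer(inputs, num_filters=16, kernel_size=3, strides=1,
                 activation='relu', batch_normalization=True):
    """Build one Conv2D-BN-ReLU stage on top of `inputs`."""
    x = Conv2D(num_filters,
               kernel_size=kernel_size,
               strides=strides,
               padding='same',
               kernel_initializer='he_normal',
               kernel_regularizer=l2(1e-4))(inputs)
    if batch_normalization:
        x = BatchNormalization()(x)
    if activation is not None:
        x = Activation(activation)(x)
    return x
```

Calling this repeatedly, with strides and activations varied per block, is enough to assemble the whole network body.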

It's referred to as version 1, as we will see in the next section, an improved ResNet

was proposed, and that has been called ResNet version 2, or v2. Over ResNet,

ResNet v2 has an improved residual block design resulting in better performance.
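The key design difference between the two residual blocks is the layer ordering: v1 applies Conv2D-BN-ReLU, while v2 uses the "pre-activation" order BN-ReLU-Conv2D and drops the activation after the addition. A hedged sketch of both (the helper names v1_block and v2_block are illustrative, not the book's code):

```python
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add

def v1_block(x, filters=16):
    # ResNet v1: Conv2D -> BN -> ReLU inside the block, ReLU after the add
    y = Conv2D(filters, 3, padding='same')(x)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, 3, padding='same')(y)
    y = BatchNormalization()(y)
    x = Add()([x, y])
    return Activation('relu')(x)

def v2_block(x, filters=16):
    # ResNet v2: BN -> ReLU -> Conv2D (pre-activation), no ReLU after the add
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(filters, 3, padding='same')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, 3, padding='same')(y)
    return Add()([x, y])
```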

Layers                 Output Size        Filter Size   Operations
Convolution            32 × 32            16            3 × 3 Conv2D
Residual Block (1)     32 × 32                          {3 × 3 Conv2D, 3 × 3 Conv2D} × n
Transition Layer (1)   32 × 32 → 16 × 16                {1 × 1 Conv2D, strides = 2}
Residual Block (2)     16 × 16            32            {3 × 3 Conv2D (strides = 2 if 1st Conv2D), 3 × 3 Conv2D} × n

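The table rows above map directly to code. The following sketch builds the first convolution, Residual Block (1), and the strided transition into Residual Block (2); the conv_bn_relu helper is illustrative (for brevity, only the first unit of block (2) is shown):

```python
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, Add)
from tensorflow.keras.models import Model

def conv_bn_relu(x, filters, strides=1, activation='relu'):
    """One Conv2D-BN(-ReLU) stage with 3 x 3 kernels, as in the table."""
    x = Conv2D(filters, 3, strides=strides, padding='same')(x)
    x = BatchNormalization()(x)
    if activation is not None:
        x = Activation(activation)(x)
    return x

n = 3  # ResNet20
inputs = Input(shape=(32, 32, 3))
x = conv_bn_relu(inputs, 16)  # Convolution row: 32 x 32, 16 filters

# Residual Block (1): {3 x 3 Conv2D, 3 x 3 Conv2D} x n at 32 x 32
for _ in range(n):
    y = conv_bn_relu(x, 16)
    y = conv_bn_relu(y, 16, activation=None)
    x = Activation('relu')(Add()([x, y]))

# Transition into Residual Block (2): strides = 2 on the 1st Conv2D halves
# the feature map to 16 x 16, and a 1 x 1 Conv2D with strides = 2 projects
# the shortcut so both branches of the add have matching shapes.
y = conv_bn_relu(x, 32, strides=2)
y = conv_bn_relu(y, 32, activation=None)
shortcut = Conv2D(32, 1, strides=2, padding='same')(x)
x = Activation('relu')(Add()([shortcut, y]))

model = Model(inputs, x)  # output shape: (None, 16, 16, 32)
```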
