

To prevent the number of feature maps from increasing to the point of being computationally inefficient, DenseNet introduced the Bottleneck layer as shown in Figure 2.4.2. The idea is that after every concatenation, a 1 × 1 convolution with a filter size equal to 4k is applied. This dimensionality reduction technique prevents the number of feature maps processed by Conv2D(3) from increasing rapidly.

The Bottleneck layer then modifies the DenseNet layer as BN-ReLU-Conv2D(1)-BN-ReLU-Conv2D(3), instead of just BN-ReLU-Conv2D(3). We've included the kernel size as an argument of Conv2D for clarity. With the Bottleneck layer, every Conv2D(3) processes just 4k feature maps instead of (l − 1) × k + k₀ for layer l. For example, for the 101-layer network, the input to the last Conv2D(3) is still only 48 feature maps for k = 12, instead of the 1224 computed previously.
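The following is a minimal sketch of this BN-ReLU-Conv2D(1)-BN-ReLU-Conv2D(3) sequence using the Keras functional API. It is an illustration rather than the book's exact code; the function name bottleneck_layer and the growth_rate argument are our own:

from tensorflow.keras.layers import Activation, BatchNormalization, Conv2D

def bottleneck_layer(x, growth_rate=12):
    """Illustrative Bottleneck layer: BN-ReLU-Conv2D(1)-BN-ReLU-Conv2D(3)."""
    # BN-ReLU-Conv2D(1): compress the concatenated inputs to 4k feature maps
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(4 * growth_rate, kernel_size=1, padding='same')(y)
    # BN-ReLU-Conv2D(3): generate k new feature maps from only 4k inputs
    # (4k = 48 for k = 12, instead of (l - 1) x k + k0)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(growth_rate, kernel_size=3, padding='same')(y)
    return y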

Figure 2.4.3: The transition layer in between two Dense blocks

To solve the problem of feature map size mismatch, DenseNet divides a deep network into multiple dense blocks that are joined together by transition layers, as shown in the preceding figure. Within each dense block, the feature map size (that is, width and height) remains constant.

The role of the transition layer is to transition from one feature map size to a smaller feature map size between two dense blocks. The reduction in size is usually by half. This is accomplished by the average pooling layer. For example, an AveragePooling2D with default pool_size=2 reduces the size from (64, 64, 256) to (32, 32, 256). The input to the transition layer is the output of the last concatenation layer in the previous dense block.
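As a quick illustration of the halving behavior, the sketch below (not the book's code; the input shape follows the example above) passes the output of a dense block through AveragePooling2D with its default pool_size=2. The full DenseNet transition layer may also apply BatchNormalization and a 1 × 1 Conv2D before pooling:

from tensorflow.keras.layers import AveragePooling2D, Input
from tensorflow.keras.models import Model

# Output of the last concatenation in the previous dense block, e.g. (64, 64, 256)
inputs = Input(shape=(64, 64, 256))
# Default pool_size=2 halves width and height: (64, 64, 256) -> (32, 32, 256)
outputs = AveragePooling2D()(inputs)

model = Model(inputs, outputs)
model.summary()  # the pooling layer's output shape is (None, 32, 32, 256)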

