
Advanced Convolutional Neural Networks

Of course, it is possible to use multiple filters, with each filter acting as a feature
identifier. For instance, in images a filter can identify edges, colors, lines, and
curves. The key intuition is to treat the filter values as weights and fine-tune them
during training via backpropagation.

A convolution layer can be configured with the following parameters:

• Kernel size: the field of view of the convolution

• Stride: the step size of the kernel as it traverses the image

• Padding: how the border of the sample is handled
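Together these three parameters determine the spatial size of the layer's output. A minimal sketch of the standard output-size formula (the helper name is illustrative, not a library function):

```python
# Spatial output size of a convolution for one dimension of a square input:
# out = floor((in + 2 * padding - kernel) / stride) + 1
def conv_output_size(in_size, kernel_size, stride=1, padding=0):
    return (in_size + 2 * padding - kernel_size) // stride + 1

# A 3x3 kernel with stride 1 and no padding shrinks a 28x28 input to 26x26,
# while padding=1 ("same" padding for a 3x3 kernel) preserves the size,
# and stride 2 halves it.
print(conv_output_size(28, 3))                       # 26
print(conv_output_size(28, 3, padding=1))            # 28
print(conv_output_size(28, 3, stride=2, padding=1))  # 14
```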

Dilated convolution

Dilated convolutions (or atrous convolutions) introduce one more parameter:

• Dilation rate: the spacing between the values in a kernel

Dilated convolutions are used in many contexts including audio processing with

WaveNet.
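The appeal of dilation is that it enlarges a kernel's field of view without adding weights. A small sketch of the usual effective-size formula (the function name is ours, for illustration):

```python
# With dilation rate d, a kernel of size k covers an effective span of
# k_eff = k + (k - 1) * (d - 1) input positions, with no extra weights.
def effective_kernel_size(kernel_size, dilation_rate):
    return kernel_size + (kernel_size - 1) * (dilation_rate - 1)

# Stacking dilated convolutions (as in WaveNet-style models) with rates
# 1, 2, 4 makes a 3-tap kernel span 3, 5, and 9 positions respectively.
for rate in (1, 2, 4):
    print(effective_kernel_size(3, rate))  # 3, 5, 9
```

In tf.keras this corresponds to the `dilation_rate` argument of layers such as `Conv1D` and `Conv2D`.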

Transposed convolution

Transposed convolution is a transformation going in the opposite direction of
a normal convolution. For instance, this can be useful to project feature maps
into a higher-dimensional space or for building convolutional autoencoders (see
Chapter 9, Autoencoders). One way to think about transposed convolution is to
first compute the output shape of a normal CNN for a given input shape, and then
invert the input and output shapes with the transposed convolution. TensorFlow
2.0 supports transposed convolutions with the Conv2DTranspose layer, which can
be used, for instance, in GANs (see Chapter 6, Generative Adversarial Networks) for
generating images.
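The shape inversion can be sketched with the size rules used by "same" padding (as in tf.keras layers); the helper names here are illustrative:

```python
import math

# With "same" padding, a strided convolution maps in -> ceil(in / stride),
# and the matching transposed convolution maps in -> in * stride.
def conv_same(in_size, stride):
    return math.ceil(in_size / stride)

def conv_transpose_same(in_size, stride):
    return in_size * stride

# A stride-2 convolution halves a 32x32 feature map to 16x16; the transposed
# convolution with the same stride restores the original spatial size.
down = conv_same(32, 2)            # 16
up = conv_transpose_same(down, 2)  # 32
print(down, up)
```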

Separable convolution

Separable convolution aims at separating the kernel into multiple steps. Let the
convolution be y = conv(x, k), where y is the output, x is the input, and k is the
kernel. Let's assume the kernel is separable, that is, k = k1 · k2ᵀ, the outer product
of two vectors. In this case, instead of doing a two-dimensional convolution with k,
we can get the same result by doing two one-dimensional convolutions with k1 and k2.
Separable convolutions are frequently used to save on computation resources.
