Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub

size = 5
weight = torch.ones(size) * 0.2

F.conv1d(torch.as_tensor(temperatures).float().view(1, 1, -1),
         weight=weight.view(1, 1, -1))

Output

tensor([[[8.4000, 8.0000, 6.4000, 3.4000, 2.2000,
          1.8000, 2.0000, 1.8000, 2.0000]]])

Does it look familiar? That’s a moving average, just like those we used in Chapter 6.

"Does it mean every 1D convolution is a moving average?"

Well, kinda … in the functional form above, we had to provide the weights but, as expected, the corresponding module (nn.Conv1d) will learn the weights itself. Since there is no requirement that the weights add up to one, it won’t be a moving average but rather a moving weighted sum.
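As a quick sketch of that equivalence (the temperatures below are made-up values, not the book's dataset), we can load the averaging weights into an nn.Conv1d module and check that it reproduces the functional call; left to train, the module would learn different weights:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical sequence of daily temperatures (for illustration only)
temperatures = [12., 14., 11., 9., 8., 10., 13., 12., 11., 9.]
seq = torch.as_tensor(temperatures).float().view(1, 1, -1)

# functional form: we supply the (fixed) weights -> a moving average
weight = torch.ones(5) * 0.2
avg = F.conv1d(seq, weight=weight.view(1, 1, -1))

# module form: nn.Conv1d owns (and would learn) its weights
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=5, bias=False)
with torch.no_grad():
    # load the averaging weights so both forms match
    conv.weight.copy_(weight.view(1, 1, -1))

print(torch.allclose(conv(seq), avg))  # the module reproduces the average
```

Once training updates conv.weight, nothing keeps the five values summing to one, which is why the general case is a moving weighted sum.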

Moreover, it is very unlikely we’ll use it over a single feature like in the example above. Things get more interesting as we include more features to be convolved with the filter, which brings us to the next topic…

Shapes

The shapes topic, one more time, I know—unfortunately, there is no escape from it.

In Chapter 4 we discussed the NCHW shape for images:

• N stands for the Number of images (in a mini-batch, for instance)

• C stands for the number of Channels (or filters) in each image

• H stands for each image’s Height

• W stands for each image’s Width

For sequences, the shape should be NCL:

• N stands for the Number of sequences (in a mini-batch, for instance)

• C stands for the number of Channels (or filters) in each element of the sequence

• L stands for the Length of each sequence
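A minimal sketch of the NCL convention (all sizes below are made-up numbers): a mini-batch of sequences with more than one feature per element, run through a 1D convolution.

```python
import torch
import torch.nn as nn

# N=3 sequences, C=2 features (channels) per element, L=10 elements each
batch = torch.randn(3, 2, 10)

# one size-5 filter convolving over both input channels at once
conv = nn.Conv1d(in_channels=2, out_channels=1, kernel_size=5)
out = conv(batch)

print(out.shape)  # N stays 3, C becomes 1 (one filter), L shrinks to 10-5+1=6
```

Note that in_channels must match the C dimension of the input, while out_channels sets the C dimension of the output: one value per filter.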
