Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner’s Guide - leanpub

Output

tensor([[[[-4.2000, -6.6859, -4.9735, -3.5615],
          [-1.2363,  0.5150, -1.8602, -4.7287],
          [-2.1209, -4.1894, -4.3694, -5.5897],
          [-4.3954, -6.1578, -4.5968, -5.0000]]]],
       grad_fn=<MkldnnConvolutionBackward>)

These results are gibberish now (and yours are going to be different from mine) because the convolutional module randomly initializes the weights representing the kernel / filter.

That’s the whole point of the convolutional module: it will learn the kernel / filter on its own.

In traditional computer vision, people would develop different filters for different purposes: blurring, sharpening, edge detection, and so on.
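To make this concrete, here is a minimal sketch of the traditional approach: a hand-crafted edge-detection kernel (one classic example; the specific values are not from this book) applied with PyTorch's functional convolution.

```python
import torch
import torch.nn.functional as F

# A classic hand-crafted edge-detection kernel; its entries sum to zero,
# so it responds only to intensity changes, not to flat regions
edge_kernel = torch.tensor([[[[-1., -1., -1.],
                              [-1.,  8., -1.],
                              [-1., -1., -1.]]]])  # shape: (out, in, 3, 3)

# A perfectly flat image has no edges at all
flat_image = torch.ones(1, 1, 5, 5)

out = F.conv2d(flat_image, edge_kernel, stride=1)
print(out)  # all zeros: no intensity changes, so no edge responses
```

A learned filter, by contrast, starts from random values and only ends up edge-like (or blur-like, or something else entirely) if that helps minimize the loss.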

But, instead of being clever and trying to manually devise a filter that does the trick for a given problem, why not outsource the filter definition to the neural network as well? This way, the network will come up with filters that highlight features that are relevant to the task at hand.

It’s no surprise that the resulting tensor shows a grad_fn attribute now: it will be used to compute gradients so the network can actually learn how to change the weights representing the filter.
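A quick sketch of what that grad_fn makes possible (the seed and input are arbitrary, chosen just for illustration): calling backward() on a value derived from the output populates gradients on the kernel weights.

```python
import torch
import torch.nn as nn

torch.manual_seed(13)  # arbitrary seed, just for reproducibility
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1)
image = torch.randn(1, 1, 5, 5)

out = conv(image)
print(out.grad_fn is not None)  # True: the output tracks the convolution op

# Backpropagate from a scalar (the mean, as a stand-in for a real loss)
out.mean().backward()
print(conv.weight.grad.shape)  # torch.Size([1, 1, 3, 3]): one gradient per weight
```

In a real training loop, an optimizer would then use conv.weight.grad to update the filter.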

"Can we tell it to learn multiple filters at once?"

Sure we can; that’s the role of the out_channels argument. If we set it to 2, it will generate two (randomly initialized) filters:

conv_multiple = nn.Conv2d(
    in_channels=1, out_channels=2, kernel_size=3, stride=1
)
conv_multiple.weight
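A quick way to confirm there really are two filters is to inspect the weight tensor's shape, which follows the (out_channels, in_channels, kernel_height, kernel_width) convention (the seed below is arbitrary, added only so the snippet is self-contained):

```python
import torch
import torch.nn as nn

torch.manual_seed(13)  # arbitrary seed; your random weights will differ
conv_multiple = nn.Conv2d(
    in_channels=1, out_channels=2, kernel_size=3, stride=1
)

# One 3x3 filter per output channel over the single input channel
print(conv_multiple.weight.shape)  # torch.Size([2, 1, 3, 3])
```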

354 | Chapter 5: Convolutions
