
dummy_list

Output

[(Linear(in_features=1, out_features=1, bias=True),
  (tensor([0.3000]),),
  tensor([-0.7514], grad_fn=<AddBackward0>))]

Now we’re talking! Here is the tuple we were expecting! If you call the model once again, it will append yet another tuple to the list, and so on and so forth. This hook is going to be hooked to our model until it is explicitly removed (hence the need to keep the handles). To remove a hook, you can simply call its remove() method:

dummy_handle.remove()

And the hook goes bye-bye! But we did not lose the collected information, since our variable, dummy_list, was defined outside the hook function.
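For reference, the setup that produces an output like this looks roughly like the sketch below. The actual code appears earlier in the chapter; the exact weights (and therefore the output value) depend on the random seed, so treat the numbers, and names such as dummy_model and dummy_hook, as assumptions:

import torch
import torch.nn as nn

# A one-neuron dummy model, like the one used above
dummy_model = nn.Linear(1, 1)

# The list lives OUTSIDE the hook function, so it survives
# the hook's removal
dummy_list = []

def dummy_hook(layer, inputs, outputs):
    # A forward hook receives the layer instance, a tuple of
    # inputs, and the produced output
    dummy_list.append((layer, inputs, outputs))

# register_forward_hook() returns a handle; keep it so the
# hook can be removed later
dummy_handle = dummy_model.register_forward_hook(dummy_hook)

# Each forward pass appends one more tuple to dummy_list
dummy_model(torch.tensor([0.3000]))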

Look at the first element of the tuple: it is an instance of a model (or layer). Even if we use a Sequential model and name the layers, the names won’t make it to the hook function. So we need to be clever here and make the association ourselves.

Let’s get back to our real model now. We can get a list of all its named modules by using the appropriate method: named_modules() (what else could it be?!).

modules = list(sbs_cnn1.model.named_modules())
modules

Output

[('', Sequential(
   (conv1): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1))
   (relu1): ReLU()
   (maxp1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
   (flatten): Flatten()
   (fc1): Linear(in_features=16, out_features=10, bias=True)
   (relu2): ReLU()
   (fc2): Linear(in_features=10, out_features=3, bias=True)
 )),
 ('conv1', Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1))),
 ('relu1', ReLU()),
 ('maxp1', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
 ('flatten', Flatten()),
 ('fc1', Linear(in_features=16, out_features=10, bias=True)),
 ('relu2', ReLU()),
 ('fc2', Linear(in_features=10, out_features=3, bias=True))]

The first, unnamed, module is the whole model itself. The other, named, modules are its layers. Any of those layers may be one of the inputs of the hook function. So, we need to be able to look up the layer name, given the corresponding layer instance. If only there was something we could use to easily look up values, right? A dictionary does exactly that:

layer_names = {layer: name for name, layer in modules[1:]}
layer_names
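Since nn.Module instances are hashable, they work as dictionary keys, and a hook function can now recover each layer’s name from the instance it receives. As a sketch (the hook below and the visualization dictionary are illustrative, not the book’s exact implementation), we could store each layer’s output under its name:

visualization = {}

def hook_fn(layer, inputs, outputs):
    # Use the instance-to-name dictionary to label the output
    name = layer_names[layer]
    visualization[name] = outputs.detach().cpu()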

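We could then register this hook on every named layer, keeping the handles so the hooks can be removed once we are done. A sketch, assuming a random 10x10 single-channel input (the shape our first convolutional layer expects):

handles = {}
for name, layer in modules[1:]:
    handles[name] = layer.register_forward_hook(hook_fn)

# A single forward pass fills `visualization`, keyed by layer name
device = next(sbs_cnn1.model.parameters()).device
dummy_x = torch.randn(1, 1, 10, 10, device=device)
sbs_cnn1.model(dummy_x)

# Hooks stay attached until explicitly removed
for handle in handles.values():
    handle.remove()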
