Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)


dummy_model = nn.Linear(1, 1)

dummy_list = []

def dummy_hook(layer, inputs, outputs):
    dummy_list.append((layer, inputs, outputs))

The (forward) hook function takes three arguments:

• a model (or layer)
• a tensor representing the inputs taken by that model (or layer)
• a tensor representing the outputs generated by that model (or layer)

So, any function that takes three arguments, regardless of their names, can work as a hook. In our case (and in many other cases too), we would like to keep the information that goes through the hook function.

You should use a variable (or variables) defined outside the hook function to store values.

That’s the role of the dummy_list variable in the snippet above. Our dummy_hook() function is as basic as it gets: It simply appends a tuple of its three arguments to the dummy_list variable defined outside the hook function.

"How do you hook the hook function to the model?"

There is a method for it, register_forward_hook(), which takes the hook function and returns a handle, so we can keep track of the hooks attached to our model.

dummy_handle = dummy_model.register_forward_hook(dummy_hook)
dummy_handle

Output

<torch.utils.hooks.RemovableHandle at 0x7fc9a003e190>

Simple enough, right? Let’s see it in action:

Visualizing Filters and More! | 395

dummy_x = torch.tensor([0.3])
dummy_model.forward(dummy_x)

Output

tensor([-0.7514], grad_fn=<AddBackward0>)

It should add a new tuple to the dummy list, one containing a linear layer, an input tensor (0.3), and an output tensor (-0.7514). By the way, your values are going to be different than mine, since we didn’t bother to use a seed here.

dummy_list

Output

[]

"Empty?! So it is not working?"

GOTCHA! I deliberately used the model’s forward() method here to illustrate something we’ve discussed much earlier, in Chapter 1:

You should NOT call the forward(x) method! You should call the whole model instead, as in model(x), to perform a forward pass. Otherwise, your hooks won’t work.

Let’s do it right this time:

dummy_model(dummy_x)

Output

tensor([-0.7514], grad_fn=<AddBackward0>)

396 | Chapter 5: Convolutions
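Putting the pieces together, here is a minimal, self-contained sketch of the whole hook workflow. The names here (model, captured, hook, handle) are illustrative rather than taken from the book’s code, and a seed is added so the run is reproducible — it also shows the handle’s remove() method, which is what the RemovableHandle returned by register_forward_hook() is for:

```python
import torch
import torch.nn as nn

torch.manual_seed(42)  # seeded, so this sketch is reproducible
model = nn.Linear(1, 1)

captured = []  # defined OUTSIDE the hook, so it survives across calls

def hook(layer, inputs, outputs):
    # appends (layer, tuple of input tensors, output tensor)
    captured.append((layer, inputs, outputs))

handle = model.register_forward_hook(hook)

x = torch.tensor([0.3])
model(x)          # calling the whole model triggers the hook...
model.forward(x)  # ...but calling forward() directly does NOT

print(len(captured))  # 1 — only the model(x) call was captured

# the handle lets us detach the hook once we're done with it
handle.remove()
model(x)
print(len(captured))  # still 1 — the hook is gone
```

Calling model(x) goes through nn.Module’s __call__ machinery, which runs the registered hooks around forward(); calling forward(x) directly skips that machinery entirely, which is why the second call above leaves captured untouched.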

