Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)


StepByStep Method

def visualize_filters(self, layer_name, **kwargs):
    try:
        # Gets the layer object from the model
        layer = self.model
        for name in layer_name.split('.'):
            layer = getattr(layer, name)
        # We are only looking at filters for 2D convolutions
        if isinstance(layer, nn.Conv2d):
            # Takes the weight information
            weights = layer.weight.data.cpu().numpy()
            # weights -> (channels_out (filter), channels_in, H, W)
            n_filters, n_channels, _, _ = weights.shape
            # Builds a figure
            size = (2 * n_channels + 2, 2 * n_filters)
            fig, axes = plt.subplots(n_filters, n_channels,
                                     figsize=size)
            axes = np.atleast_2d(axes)
            axes = axes.reshape(n_filters, n_channels)
            # For each channel_out (filter)
            for i in range(n_filters):
                StepByStep._visualize_tensors(
                    axes[i, :],                    # 1
                    weights[i],                    # 2
                    layer_name=f'Filter #{i}',
                    title='Channel'
                )
            for ax in axes.flat:
                ax.label_outer()
            fig.tight_layout()
            return fig
    except AttributeError:
        return

setattr(StepByStep, 'visualize_filters', visualize_filters)

1 The i-th row of subplots corresponds to a particular filter; each row has as many columns as there are input channels.

Visualizing Filters and More! | 393


2 The i-th element of the weights corresponds to the i-th filter, which learned different weights to convolve each of the input channels.
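The shape convention the method relies on, (channels_out, channels_in, H, W), can be checked directly on a freshly created convolutional layer. A minimal sketch (this toy layer is my own illustration, configured like the chapter's single-channel, single-filter setup):

```python
import torch
import torch.nn as nn

# A 2D convolution with 1 input channel, 1 output channel (filter),
# and a 3x3 kernel (illustrative values)
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)

# Weight tensor shape follows (channels_out (filter), channels_in, H, W)
weights = conv.weight.data.cpu().numpy()
print(weights.shape)  # (1, 1, 3, 3)
```

With more filters or more input channels, only the first two dimensions grow; the kernel size stays in the last two.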

OK, let’s see what the filter looks like:

fig = sbs_cnn1.visualize_filters('conv1', cmap='gray')

Figure 5.19 - Our model’s only filter

Is this a filter one could come up with to try to distinguish between the different classes we have? Maybe, but just by looking at this filter, it is not easy to grasp what it is effectively accomplishing.

To really understand the effect this filter has on each image, we need to visualize the intermediate values produced by our model, namely, the output of each and every layer!

"How can we visualize the output of each layer? Do we have to modify our StepByStep class to capture those?"

It is much easier than that: we can use hooks!

Hooks

A hook is simply a way to force a model to execute a function either after its forward pass or after its backward pass. Hence, there are forward hooks and backward hooks. We're using only forward hooks here, but the idea is the same for both.

First, we create a function that is going to be, guess what, hooked to the forward pass. Let's illustrate the process with a dummy model:

394 | Chapter 5: Convolutions
