


    return fig

setattr(StepByStep, 'visualize_outputs', visualize_outputs)
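Before using it, it is worth recalling where the plotted outputs come from: they were captured during the forward pass by hooks. The snippet below is a minimal, self-contained sketch of that mechanism using PyTorch's register_forward_hook; the model, the `captured` dictionary, and the helper are illustrative stand-ins, not the book's actual implementation:

import torch
import torch.nn as nn

# Illustrative stand-in for the featurizer built earlier in the chapter
model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(1, 1, kernel_size=3))
model.add_module('relu1', nn.ReLU())
model.add_module('maxp1', nn.MaxPool2d(kernel_size=2))
model.add_module('flatten', nn.Flatten())

captured = {}  # layer name -> output tensor captured during the forward pass

def make_hook(name):
    def hook(layer, inputs, output):
        # store a detached copy so the autograd graph isn't kept alive
        captured[name] = output.detach().cpu()
    return hook

handles = [layer.register_forward_hook(make_hook(name))
           for name, layer in model.named_children()]

dummy_batch = torch.randn(10, 1, 10, 10)  # ten single-channel 10x10 images
model(dummy_batch)  # the forward pass fills `captured` for all four layers

for handle in handles:
    handle.remove()  # remove the hooks once the outputs have been collected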

Then, let’s use the method above to plot the feature maps for the layers in the featurizer part of our model:

featurizer_layers = ['conv1', 'relu1', 'maxp1', 'flatten']

with plt.style.context('seaborn-white'):
    fig = sbs_cnn1.visualize_outputs(featurizer_layers)
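One practical note: Matplotlib 3.6 renamed its bundled seaborn styles, so on newer versions 'seaborn-white' may no longer be recognized. A small guard like this sketch picks whichever name the installed version offers:

import matplotlib.pyplot as plt

# the seaborn styles were renamed to 'seaborn-v0_8-*' in Matplotlib 3.6
style = ('seaborn-white' if 'seaborn-white' in plt.style.available
         else 'seaborn-v0_8-white')
with plt.style.context(style):
    pass  # plotting calls go here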

Figure 5.20 - Feature maps (featurizer)

Figure 5.21 - Mini-batch of images (reproduced here for an easier comparison)

Looks cool, right? Even though I’ve plotted the images in the first four rows with the same size, they have different dimensions, as indicated by the row labels on the left. The shade of gray is also computed per row: The maximum (white) and minimum (black) values were computed across the ten images produced by a given layer; otherwise, some rows would be too dark (the ranges vary a lot from one layer to the next).
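In other words, every image in a row shares a single gray scale. A minimal sketch of that per-row normalization, where the array below is a random placeholder for a layer's ten captured outputs:

import numpy as np
import matplotlib.pyplot as plt

layer_output = np.random.randn(10, 8, 8)  # placeholder for, e.g., conv1's outputs

# min (black) and max (white) computed across ALL ten images of the row
vmin, vmax = layer_output.min(), layer_output.max()

fig, axes = plt.subplots(1, 10, figsize=(12, 1.5))
for image, ax in zip(layer_output, axes):
    # the shared vmin/vmax keep the shades comparable within the row
    ax.imshow(image, cmap='gray', vmin=vmin, vmax=vmax)
    ax.set_xticks([])
    ax.set_yticks([])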

What can we learn from these images? First, convolving the learned filter with the input image produces some interesting results:

• For diagonals tilted to the left (images #0, #1, #2, and #7), the filter seems to suppress the diagonal completely.

• For parallel lines (only verticals in the example above, images #3 to #6, and #8), the filter produces a striped pattern, brighter to the left of the original line, darker to its right.

• For diagonals tilted to the right (only image #9), the filter produces a thicker line with multiple shades.

Then, the ReLU activation function removes the negative values. Unfortunately, after this operation, images #6 and #8 (parallel vertical lines) had all lines suppressed and seem indistinguishable from images #0, #1, #2, and #7 (diagonals tilted to the left).

Next, max pooling reduces the dimensions of the images, and they get flattened to represent sixteen features.

Now, look at the flattened features. That’s what the classifier will look at to try to split the images into three different classes. For a relatively simple problem like this, we can pretty much see the patterns there. Let’s see what the classifier layers can make of it.

Visualizing Classifier Layers

The second part of our model, which is aptly called a classifier, has the typical structure: a hidden layer (FC1), an activation function, and an output layer (FC2). Let’s visualize the outputs of each and every one of the layers that were captured by our hook function for the same ten images:

classifier_layers = ['fc1', 'relu2', 'fc2']

with plt.style.context('seaborn-white'):
    fig = sbs_cnn1.visualize_outputs(classifier_layers, y=labels_batch, yhat=predicted)
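The `predicted` tensor above holds the class assigned to each of the ten images. The chapter builds it elsewhere, but a minimal sketch of how such a tensor could be derived from the model's logits (the logits below are random placeholders):

import torch

logits = torch.randn(10, 3)              # placeholder: ten images, three classes
predicted = torch.argmax(logits, dim=1)  # index of the largest logit per image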
