
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


Figure 5.22 - Feature maps (classifier)

The hidden layer performed an affine transformation (remember those?), reducing the dimensionality from sixteen dimensions to ten. Next, the activation function, a ReLU, eliminated negative values, resulting in the "activated" feature space in the middle row.
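The two steps above can be sketched in PyTorch as follows (the layer sizes come from the text; the input tensor here is random, just to illustrate the shapes):

```python
import torch
import torch.nn as nn

torch.manual_seed(42)

# hypothetical hidden layer: affine transformation from 16 to 10 dimensions
hidden = nn.Linear(in_features=16, out_features=10)
activation = nn.ReLU()

features = torch.randn(1, 16)   # one flattened 16-dimensional feature vector
z = hidden(features)            # affine: z = Wx + b, shape (1, 10)
activated = activation(z)       # ReLU zeroes out the negative values

print(activated.shape)          # torch.Size([1, 10])
print(bool((activated >= 0).all()))  # True: no negatives survive the ReLU
```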

Finally, the output layer used these ten values to compute three logits, one for each class. Even without transforming them into probabilities, we know that the largest logit wins. The largest logit is shown as the brightest pixel, so we can tell which class was predicted by looking at the three shades of gray and picking the index of the brightest one.
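Picking the winning class straight from the logits is a single `argmax` call; softmax is monotonic, so it preserves the ordering and never changes the winner. The logit values below are made up for illustration:

```python
import torch

# hypothetical logits for a batch of two images, three classes each
logits = torch.tensor([[ 2.1, -0.5,  0.3],
                       [-1.0,  0.9,  0.2]])

# the largest logit wins -- no need to convert to probabilities first
predicted = logits.argmax(dim=1)
print(predicted)  # tensor([0, 1])
```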

The classifier got eight out of ten right. It made wrong predictions for images #6 and #8. Unsurprisingly, these are the two images that got their vertical lines suppressed. The filter doesn’t seem to work so well whenever the vertical line is too close to the left edge of the image.

"How good is the model actually?"

Good question! Let’s check it out.

Accuracy

In Chapter 3, we made predictions using our own predict() method and used Scikit-Learn’s metrics module to evaluate them. Now, let’s build a method that also takes features (x) and labels (y), as returned by a data loader, and that takes all necessary steps to produce two values for each class: the number of correct predictions and the number of data points in that class.
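A method along these lines might look like the sketch below. The function name, the three-class setup, and the toy model are assumptions for illustration, not the book’s actual implementation:

```python
import torch

def correct_per_class(model, x, y, n_classes=3):
    """Return (correct, total) counts for each class in the batch.

    Note: this is a hypothetical standalone function; in the book the
    logic lives in a method of the model-training class.
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)               # shape: (batch, n_classes)
    predicted = logits.argmax(dim=1)    # the largest logit wins
    correct = torch.zeros(n_classes, dtype=torch.long)
    total = torch.zeros(n_classes, dtype=torch.long)
    for c in range(n_classes):
        mask = (y == c)                 # data points belonging to class c
        total[c] = mask.sum()
        correct[c] = (predicted[mask] == c).sum()
    return correct, total

# usage with a toy model and random data (for illustration only)
torch.manual_seed(13)
model = torch.nn.Linear(16, 3)
x = torch.randn(10, 16)
y = torch.randint(0, 3, (10,))
correct, total = correct_per_class(model, x, y)
```

Summing `correct` and `total` across classes recovers the overall accuracy as `correct.sum() / total.sum()`.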

Visualizing Filters and More! | 407
