Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)
• the filters learned by the model produce the features that will feed the classifier part
• computing accuracy for a multiclass classification problem
• creating a static method to apply a function to all the mini-batches in a data loader
Congratulations! You took one big step toward being able to tackle many computer vision problems. This chapter introduced the fundamental concepts related to (almost) all things convolutional. We still need to add some more tricks to our arsenal so we can make our models even more powerful. In the next chapter, we'll learn about convolutions over multiple channels, using dropout layers to regularize a model, finding a learning rate, and the inner workings of optimizers.
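The recap above mentions computing accuracy for a multiclass classification problem. A minimal sketch of one common way to do this in PyTorch (an illustration, not the book's own StepByStep implementation): take the index of the largest logit as the predicted class and compare it to the labels.

```python
import torch

# Logits for a mini-batch of 4 samples and 3 classes
logits = torch.tensor([[ 2.0, 0.5, -1.0],
                       [ 0.1, 1.5,  0.3],
                       [-0.5, 0.2,  2.2],
                       [ 1.0, 0.9,  0.8]])
labels = torch.tensor([0, 1, 2, 1])

# The predicted class is the index of the largest logit in each row
predicted = torch.argmax(logits, dim=1)

# Accuracy: fraction of correct predictions in the mini-batch
accuracy = (predicted == labels).float().mean()
print(accuracy.item())  # 0.75 (3 out of 4 correct)
```

Applying this per mini-batch and aggregating over a data loader is exactly the kind of job the static method mentioned above handles.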
Chapter 6
Rock, Paper, Scissors
Spoilers
In this chapter, we will:
• standardize an image dataset
• train a model to predict rock, paper, scissors poses from hand images
• use dropout layers to regularize the model
• learn how to find a learning rate to train the model
• understand how the Adam optimizer uses adaptive learning rates
• capture gradients and parameters to visualize their evolution during training
• understand how momentum and Nesterov momentum work
• use schedulers to implement learning rate changes during training
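One of the topics listed above is using dropout layers to regularize a model. As a hedged preview (an illustrative toy model, not this chapter's actual architecture): dropout randomly zeroes activations while the model is in training mode, and becomes a no-op in evaluation mode.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)

# A tiny classifier with a dropout layer between the hidden layers;
# p=0.5 means each activation has a 50% chance of being zeroed in training mode
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(20, 3),
)

x = torch.randn(4, 10)

model.train()   # dropout is active: outputs vary from call to call
out_train = model(x)

model.eval()    # dropout is inactive: outputs are deterministic
out_eval1 = model(x)
out_eval2 = model(x)
print(torch.allclose(out_eval1, out_eval2))  # True
```

The `train()` / `eval()` switch is the key detail: forgetting to call `eval()` before making predictions leaves dropout turned on.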
Jupyter Notebook
The Jupyter notebook corresponding to Chapter 6 [94] is part of the official Deep
Learning with PyTorch Step-by-Step repository on GitHub. You can also run it
directly in Google Colab [95] .
If you’re using a local installation, open your terminal or Anaconda prompt and
navigate to the PyTorchStepByStep folder you cloned from GitHub. Then, activate
the pytorchbook environment and run jupyter notebook:
$ conda activate pytorchbook
(pytorchbook)$ jupyter notebook
If you’re using Jupyter’s default settings, this link should open Chapter 6’s
notebook. If not, just click on Chapter06.ipynb on your Jupyter home page.
Imports
For the sake of organization, all libraries needed throughout the code used in any given chapter are imported at its very beginning.
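The exact import cell lives in the chapter's notebook; purely as an assumption, a set covering the topics listed above (data loading, dropout, optimizers, schedulers) might look like this:

```python
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
from torch.optim.lr_scheduler import StepLR
# torchvision's datasets and transforms would also typically be imported
# here for image standardization, but are omitted from this sketch
```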