Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)
Chapter 2.1
Going Classy

Spoilers
In this chapter, we will:
• define a class to handle model training
• implement the constructor method
• understand the difference between public, protected, and private methods of a class
• integrate the code we've developed so far into our class
• instantiate our class and use it to run a classy pipeline

Jupyter Notebook
The Jupyter notebook corresponding to Chapter 2.1 [61] is part of the official Deep Learning with PyTorch Step-by-Step repository on GitHub. You can also run it directly in Google Colab [62].

If you're using a local installation, open your terminal or Anaconda prompt and navigate to the PyTorchStepByStep folder you cloned from GitHub. Then, activate the pytorchbook environment and run jupyter notebook:

$ conda activate pytorchbook
(pytorchbook)$ jupyter notebook

If you're using Jupyter's default settings, this link should open Chapter 2.1's notebook. If not, just click on Chapter02.1.ipynb on your Jupyter's home page.

Imports
For the sake of organization, all libraries needed throughout the code used in any given chapter are imported at its very beginning. For this chapter, we'll need the following imports:
import numpy as np
import datetime
import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset, random_split
from torch.utils.tensorboard import SummaryWriter
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')

Going Classy
So far, the %%writefile magic has helped us to organize the code into three distinct parts: data preparation, model configuration, and model training. At the end of Chapter 2, though, we bumped into some of its limitations, like being unable to choose a different number of epochs without having to edit the model training code.

Clearly, this situation is not ideal. We need to do better. We need to go classy; that is, we need to build a class to handle the model training part.

I am assuming you have a working knowledge of object-oriented programming (OOP) in order to benefit the most from this chapter. If that's not the case, and if you didn't do it in Chapter 1, now is the time to follow tutorials like Real Python's "Object-Oriented Programming (OOP) in Python 3" [63] and "Supercharge Your Classes With Python super()" [64].

The Class
Let's start by defining our class with a rather unoriginal name: StepByStep. We're starting it from scratch: either we don't specify a parent class, or we inherit from the fundamental object class. I personally prefer the latter, so our class definition looks like this:
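The description above fully determines the starting point: a class named StepByStep that explicitly inherits from object and, for now, has an empty body. A minimal sketch:

```python
# A first, completely empty (and, so far, useless) version of the class.
# In Python 3 every class inherits from object implicitly, but spelling
# it out makes the (absence of a) parent class explicit.
class StepByStep(object):
    pass
```

Explicitly writing `(object)` changes nothing at runtime; it is purely a stylistic choice to make the inheritance chain visible. We'll fill in the constructor and methods over the course of this chapter.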