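As a quick recap, here is a minimal sketch of the kind of trick we just pulled (the Dog class, its name attribute, and the bark function below are illustrative assumptions, not the exact code from the previous page):

class Dog:
    def __init__(self, name):
        self.name = name

rex = Dog('Rex')  # instance created BEFORE the method exists

# A regular function whose first argument plays the role of `self`
def bark(dog):
    print(f'{dog.name} says: "Woof!"')

# Dynamically attaches the function to the class as a method
setattr(Dog, 'bark', bark)

rex.bark()  # the pre-existing instance gains the method too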
See? We effectively modified the underlying Dog class and all its instances at once! It looks very cool, sure. And it can wreak havoc too!

Instead of creating an attribute or method directly in the class, as we’ve been doing so far, it is possible to use setattr to create them dynamically. In our StepByStep class, the last two lines of code created two methods in the class, each having the same name as the function used to create it.

OK, but there are still some parts missing in order to perform model training. Let’s keep adding more methods.

Training Methods

The next method we need to add corresponds to Helper Function #2 in Chapter 2: the mini-batch loop. We need to change it a bit, though; there, both the data loader and the step function were arguments. This is not the case anymore, since we have both of them as attributes: self.train_loader and self.train_step_fn for training, and self.val_loader and self.val_step_fn for validation. The only thing this method needs to know is whether it is handling training or validation data.
The code should look like this:
Mini-Batch
 1 def _mini_batch(self, validation=False):
 2     # The mini-batch can be used with both loaders
 3     # The argument `validation` defines which loader and
 4     # corresponding step function are going to be used
 5     if validation:
 6         data_loader = self.val_loader
 7         step_fn = self.val_step_fn
 8     else:
 9         data_loader = self.train_loader
10         step_fn = self.train_step_fn
11
12     if data_loader is None:
13         return None
14
15     # Once the data loader and step function are set, this is the
16     # same mini-batch loop we had before
17     mini_batch_losses = []
18     for x_batch, y_batch in data_loader:
19         x_batch = x_batch.to(self.device)
20         y_batch = y_batch.to(self.device)
21
22         mini_batch_loss = step_fn(x_batch, y_batch)
23         mini_batch_losses.append(mini_batch_loss)
24
25     loss = np.mean(mini_batch_losses)
26
27     return loss
28
29 setattr(StepByStep, '_mini_batch', _mini_batch)
If the user decides not to provide a validation loader, self.val_loader retains its initial None value from the constructor method. In that case, there is no corresponding loss to compute, and the method returns None instead (line 13 in the snippet above).
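In practice, calling it with and without the validation argument could look like this (sbs stands for a hypothetical, fully configured instance of the StepByStep class):

# Assuming `sbs` is a StepByStep instance with loaders already set
loss = sbs._mini_batch()                     # train_loader / train_step_fn
val_loss = sbs._mini_batch(validation=True)  # val_loader / val_step_fn
# val_loss is None if no validation loader was provided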
What’s left to do? The training loop, of course! This is similar to our Model Training V5 in Chapter 2, but we can make it more flexible, taking the number of epochs and the random seed as arguments.
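Before we build it in full, here is a minimal sketch of what such a method could look like, assuming the loss-tracking attributes (self.total_epochs, self.losses, and self.val_losses) were initialized in the constructor:

def train(self, n_epochs, seed=42):
    # Sets the random seed for reproducibility
    torch.manual_seed(seed)

    for epoch in range(n_epochs):
        # Keeps track of the total number of epochs,
        # in case training is resumed later
        self.total_epochs += 1

        # Performs training using mini-batches
        loss = self._mini_batch(validation=False)
        self.losses.append(loss)

        # Performs evaluation; no gradients needed
        with torch.no_grad():
            val_loss = self._mini_batch(validation=True)
            self.val_losses.append(val_loss)

setattr(StepByStep, 'train', train)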