Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner’s Guide (Leanpub)


See? We effectively modified the underlying Dog class and all its instances at once! It looks very cool, sure. And it can wreak havoc too!

Instead of creating an attribute or method directly in the class, as we’ve been doing so far, it is possible to use setattr to create them dynamically. In our StepByStep class, the last two lines of code created two methods in the class, each having the same name as the function used to create it (we’ll see this pattern in isolation right after the next code listing).

OK, but there are still some parts missing in order to perform model training. Let’s keep adding more methods.

Training Methods

The next method we need to add corresponds to Helper Function #2 from Chapter 2: the mini-batch loop. We need to change it a bit, though; there, both the data loader and the step function were arguments. This is not the case anymore, since we have both of them as attributes: self.train_loader and self.train_step_fn for training; self.val_loader and self.val_step_fn for validation. The only thing this method needs to know is whether it is handling training or validation data.


The code should look like this:

Mini-Batch

 1 def _mini_batch(self, validation=False):
 2     # The mini-batch can be used with both loaders
 3     # The argument `validation` defines which loader and
 4     # corresponding step function are going to be used
 5     if validation:
 6         data_loader = self.val_loader
 7         step_fn = self.val_step_fn
 8     else:
 9         data_loader = self.train_loader
10         step_fn = self.train_step_fn
11
12     if data_loader is None:
13         return None
14
15     # Once the data loader and step function are set, this is the
16     # same mini-batch loop we had before
17     mini_batch_losses = []
18     for x_batch, y_batch in data_loader:
19         x_batch = x_batch.to(self.device)
20         y_batch = y_batch.to(self.device)
21
22         mini_batch_loss = step_fn(x_batch, y_batch)
23         mini_batch_losses.append(mini_batch_loss)
24
25     loss = np.mean(mini_batch_losses)
26
27     return loss
28
29 setattr(StepByStep, '_mini_batch', _mini_batch)
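The last line uses the same setattr trick discussed above, attaching _mini_batch to the StepByStep class. In isolation, the pattern looks like the minimal sketch below; the bark function is purely illustrative, not the exact example from the earlier section.

class Dog:
    def __init__(self, name):
        self.name = name

def bark(self):
    # once attached to the class, `self` binds as usual
    print(f'{self.name} says woof!')

# attach bark to the class itself: every instance, present or
# future, gets it as a regular method
setattr(Dog, 'bark', bark)

rex = Dog('Rex')
rex.bark()  # Rex says woof!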

If the user decides not to provide a validation loader, it will retain its initial None value from the constructor method. If that’s the case, we don’t have a corresponding loss to compute, and the method returns None instead (line 13 in the Mini-Batch snippet above).
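To make the computation concrete, here is a standalone sketch of what _mini_batch does: one loss per mini-batch, then their average. The data and step function below are dummies (illustrative assumptions, not the book’s dataset); note that .item() returns a plain Python float, which is why np.mean works on the resulting list.

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# dummy data: 20 points, mini-batches of 5
x = torch.randn(20, 1)
y = 2 * x + 0.1 * torch.randn(20, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=5)

def step_fn(x_batch, y_batch):
    # stands in for train_step_fn / val_step_fn
    return torch.nn.functional.mse_loss(2 * x_batch, y_batch).item()

mini_batch_losses = [step_fn(xb, yb) for xb, yb in loader]
print(np.mean(mini_batch_losses))  # the epoch-level average loss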

What’s left to do? The training loop, of course! This is similar to our Model Training V5 in Chapter 2, but we can make it more flexible, taking the number of epochs and a seed as arguments.
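To preview where this is heading, here is a sketch of what such a train method could look like. It is an outline under assumptions, not the final version: it presumes the constructor defined total_epochs, losses, and val_losses attributes, and it uses torch.manual_seed for reproducibility.

import torch

def train(self, n_epochs, seed=42):
    # make runs reproducible before any shuffling or weight updates
    torch.manual_seed(seed)

    for epoch in range(n_epochs):
        # keep track of epochs across multiple calls to train()
        self.total_epochs += 1

        # training pass: one full loop over the training mini-batches
        loss = self._mini_batch(validation=False)
        self.losses.append(loss)

        # validation pass: no gradients needed
        with torch.no_grad():
            val_loss = self._mini_batch(validation=True)
            self.val_losses.append(val_loss)

setattr(StepByStep, 'train', train)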
