Daniel Voigt Godoy, Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)


It is not that hard, to be honest. Remember the reduction argument? If we set it to sum, our loss function will only return the numerator of the equation above. And then we can divide it by the weighted counts ourselves:

loss_fn_imb_sum = nn.BCEWithLogitsLoss(
    reduction='sum',
    pos_weight=pos_weight
)
loss = loss_fn_imb_sum(dummy_imb_logits, dummy_imb_labels)
loss = loss / (pos_weight * n_pos + n_neg)
loss

Output

tensor([0.1643])

There we go!
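For reference, the weighted average being computed amounts to something like this (my paraphrase, not the book's exact notation):

loss = [pos_weight · Σ(positive-point losses) + Σ(negative-point losses)] / (pos_weight · n_pos + n_neg)

In case you want to run the snippet above in isolation: dummy_imb_logits, dummy_imb_labels, pos_weight, n_pos, and n_neg were all defined earlier in the chapter. Here is a minimal, self-contained sketch of how such variables could be built; the tensor values below are made up, so the resulting loss will differ from the tensor([0.1643]) above:

import torch
import torch.nn as nn

# Hypothetical imbalanced batch: three negative points, one positive
dummy_imb_labels = torch.tensor([[0.], [0.], [0.], [1.]])
dummy_imb_logits = torch.tensor([[-0.5], [0.3], [-1.2], [0.8]])

n_pos = (dummy_imb_labels == 1).sum()        # count of positive points (1)
n_neg = (dummy_imb_labels == 0).sum()        # count of negative points (3)
pos_weight = (n_neg / n_pos).view(1)         # weighs positives up to match negatives

loss_fn_imb_sum = nn.BCEWithLogitsLoss(reduction='sum', pos_weight=pos_weight)
loss = loss_fn_imb_sum(dummy_imb_logits, dummy_imb_labels)
loss = loss / (pos_weight * n_pos + n_neg)   # weighted average, not a plain mean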

Model Configuration

In Chapter 2.1, we ended up with a lean "Model Configuration" section: we only need to define a model, an appropriate loss function, and an optimizer. Let's define a model that produces logits and use nn.BCEWithLogitsLoss() as the loss function. Since we have two features, and we are producing logits instead of probabilities, our model will have one layer and one layer alone: Linear(2, 1). We will keep using the SGD optimizer with a learning rate of 0.1 for now.

This is what the model configuration looks like for our classification problem:

Model Configuration

# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

torch.manual_seed(42)
model = nn.Sequential()
model.add_module('linear', nn.Linear(2, 1))

# Defines an SGD optimizer to update the parameters
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines a BCE with logits loss function
loss_fn = nn.BCEWithLogitsLoss()
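Since the model outputs logits rather than probabilities, a quick sanity check can make the distinction concrete. This is just an illustrative sketch (the dummy input is made up, not from the book): passing a batch through the model yields unbounded real numbers, and applying the sigmoid maps them into (0, 1):

# Illustrative only: logits are unbounded; sigmoid turns them into probabilities
dummy_x = torch.randn(4, 2)      # hypothetical batch: 4 points, 2 features
logits = model(dummy_x)          # shape (4, 1), any real values
probs = torch.sigmoid(logits)    # shape (4, 1), values strictly between 0 and 1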

Model Training

Time to train our model! We can leverage the StepByStep class we built in Chapter 2.1 and use pretty much the same code as before:

Model Training

n_epochs = 100

sbs = StepByStep(model, loss_fn, optimizer)
sbs.set_loaders(train_loader, val_loader)
sbs.train(n_epochs)

fig = sbs.plot_losses()
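If you don't have the StepByStep class from Chapter 2.1 at hand, here is a highly condensed sketch of the kind of training loop its train() method encapsulates. The real class does more (it tracks training and validation losses so plot_losses() can display them, handles device placement, and so on); this is just the gist, not its actual implementation:

# Condensed sketch of the core logic inside sbs.train(n_epochs)
for epoch in range(n_epochs):
    model.train()                          # put model in training mode
    for x_batch, y_batch in train_loader:
        yhat = model(x_batch)              # forward pass: logits
        loss = loss_fn(yhat, y_batch)      # BCE with logits
        loss.backward()                    # compute gradients
        optimizer.step()                   # update parameters
        optimizer.zero_grad()              # reset gradients for the next step

plot_losses() then returns a figure with the training and validation loss curves over the epochs.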

