
The logits (z), as shown in Figure 4.4, are given by the following expression:
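The expression itself did not survive the conversion. Based on the model configured below (a bias-free linear layer over the 25 flattened pixel values x_0, ..., x_24 with weights w_0, ..., w_24), the logit is presumably:

$$ z = w_0 x_0 + w_1 x_1 + \dots + w_{24} x_{24} = \sum_{i=0}^{24} w_i x_i $$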

Model Configuration

As usual, we only need to define a model, an appropriate loss function, and an optimizer. Since we now have five-by-five single-channel images as inputs, we need to flatten them first so they become proper inputs to our linear layer (without bias). We will keep using the SGD optimizer with a learning rate of 0.1 for now.

This is what the model configuration looks like for our classification problem:

Model Configuration

import torch
import torch.nn as nn
import torch.optim as optim

# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

torch.manual_seed(17)
# Now we can create a model
model_logistic = nn.Sequential()
model_logistic.add_module('flatten', nn.Flatten())
model_logistic.add_module('output', nn.Linear(25, 1, bias=False))
model_logistic.add_module('sigmoid', nn.Sigmoid())

# Defines an SGD optimizer to update the parameters
optimizer_logistic = optim.SGD(
    model_logistic.parameters(), lr=lr
)
# Defines a binary cross-entropy loss function
binary_loss_fn = nn.BCELoss()
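As a quick sanity check (not part of the original listing; the tensor names dummy_x and dummy_y are made up for illustration), we can push a random batch of five-by-five images through the model and feed the resulting probabilities to the loss function:

# Illustrative sanity check only - dummy_x and dummy_y are made-up names
dummy_x = torch.rand(16, 1, 5, 5)            # batch of 16 single-channel 5x5 images
dummy_y = (torch.rand(16, 1) > 0.5).float()  # random binary labels
probabilities = model_logistic(dummy_x)      # Flatten -> (16, 25), Linear -> (16, 1), Sigmoid -> (0, 1)
loss = binary_loss_fn(probabilities, dummy_y)
print(probabilities.shape, loss)             # torch.Size([16, 1]) and a scalar loss tensor

The Flatten layer turns the (N, 1, 5, 5) inputs into (N, 25) tensors, the linear layer maps them to (N, 1) logits, and the sigmoid squashes those into probabilities, which is exactly what BCELoss expects.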

