Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide

    optimizer.zero_grad()  # 3

    print(b, w)

1 Defining an optimizer
2 New "Step 4 - Updating Parameters" using the optimizer
3 New "gradient zeroing" using the optimizer

Let’s inspect our two parameters just to make sure everything is still working fine:

Output

tensor([1.0235], device='cuda:0', requires_grad=True)

tensor([1.9690], device='cuda:0', requires_grad=True)

Cool! We’ve optimized the optimization process :-) What’s left?
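The full loop with the three numbered steps above can be sketched as follows. This is a minimal, self-contained sketch, not the book's exact code: the synthetic data generation, seed, learning rate, and epoch count are assumptions chosen to reproduce a simple linear regression with true parameters b = 1 and w = 2.

```python
import torch
import torch.optim as optim

# Synthetic linear data (assumption: true b = 1, true w = 2, small noise)
torch.manual_seed(42)
x = torch.rand(100, 1)
y = 1 + 2 * x + 0.1 * torch.randn(100, 1)

# Randomly initialized parameters, tracked by autograd
b = torch.randn(1, requires_grad=True)
w = torch.randn(1, requires_grad=True)

lr = 0.1
optimizer = optim.SGD([b, w], lr=lr)  # 1: defining an optimizer

for epoch in range(1000):
    yhat = b + w * x                  # forward pass
    loss = ((yhat - y) ** 2).mean()   # MSE loss, computed manually here
    loss.backward()                   # compute gradients
    optimizer.step()                  # 2: updating parameters
    optimizer.zero_grad()             # 3: gradient zeroing

print(b, w)
```

After training, b and w should land close to the true values of 1 and 2, just like the output shown above.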

Loss

We now tackle the loss computation. As expected, PyTorch has us covered once again. There are many loss functions to choose from, depending on the task at hand. Since ours is a regression, we use the mean squared error (MSE) as the loss, and thus we need PyTorch's nn.MSELoss():

    # Defines an MSE loss function
    loss_fn = nn.MSELoss(reduction='mean')

    loss_fn

Output

MSELoss()

Notice that nn.MSELoss() is NOT the loss function itself: we do not pass predictions and labels to it! Instead, as you can see, it returns another function, which we called loss_fn: that is the actual loss function. So we can pass a prediction and a label to it and get the corresponding loss value:
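As a quick sketch of that two-step pattern, here is a minimal example; the prediction and label values are made up for illustration only:

```python
import torch
import torch.nn as nn

# Step 1: create the loss function (nn.MSELoss returns a callable)
loss_fn = nn.MSELoss(reduction='mean')

# Step 2: call it with a prediction and a label (hypothetical values)
predictions = torch.tensor([0.5, 1.0, 1.5])
labels = torch.tensor([1.0, 1.0, 1.0])

# Squared errors are 0.25, 0.0, and 0.25; their mean is 0.5 / 3
loss = loss_fn(predictions, labels)
print(loss.item())
```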

