

Output

(tensor([0.], device='cuda:0'), tensor([0.], device='cuda:0'))

What does the underscore (_) at the end of the method’s name mean? Do you remember? If not, go back to the previous section and find out.
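As a quick refresher, here is a minimal sketch (not from the book's code) of the convention: a trailing underscore marks a PyTorch method that modifies its tensor in-place, instead of returning a new one.

import torch

t = torch.ones(3)
u = t.add(1)  # no underscore: returns a NEW tensor, t is unchanged
print(t, u)   # tensor([1., 1., 1.]) tensor([2., 2., 2.])

t.add_(1)     # trailing underscore: modifies t IN-PLACE
print(t)      # tensor([2., 2., 2.])

t.zero_()     # also in-place: fills t with zeros
print(t)      # tensor([0., 0., 0.])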

So, let’s ditch the manual computation of gradients and use both the backward() and zero_() methods instead.
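Before putting them to work in the full training loop, here is a tiny illustrative sketch (using a made-up one-parameter loss, not our regression) of what these two methods do: backward() populates the grad attribute of every tensor that was created with requires_grad=True, and grad.zero_() resets it in-place.

import torch

w = torch.randn(1, requires_grad=True)
loss = (2 * w).mean()
loss.backward()  # fills w.grad with d(loss)/dw
print(w.grad)    # tensor([2.])

w.grad.zero_()   # resets the accumulated gradient in-place
print(w.grad)    # tensor([0.])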

That’s it? Well, pretty much… but there is always a catch, and this time it has to do with the update of the parameters.

Updating Parameters

"One does not simply update parameters…"

Boromir

Unfortunately, our Numpy code for updating parameters is not enough. Why not?! Let’s try it out, simply copying and pasting it (this is the first attempt), changing it slightly (second attempt), and then asking PyTorch to back off (yes, it is PyTorch’s fault!).

Notebook Cell 1.6 - Updating parameters

# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)

# Defines number of epochs
n_epochs = 1000

for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train_tensor

    # Step 2 - Computes the loss
    # We are using ALL data points, so this is BATCH gradient
    # descent. How wrong is our model? That's the error!
    error = (yhat - y_train_tensor)
    # It is a regression, so it computes mean squared error (MSE)
    loss = (error ** 2).mean()

    # Step 3 - Computes gradients for both "b" and "w" parameters.
    # No more manual computation of gradients!
    # b_grad = 2 * error.mean()
    # w_grad = 2 * (x_tensor * error).mean()
    # We just tell PyTorch to work its way BACKWARDS
    # from the specified loss!
    loss.backward()

    # Step 4 - Updates parameters using gradients and
    # the learning rate. But not so fast...

    # FIRST ATTEMPT - just using the same code as before
    # AttributeError: 'NoneType' object has no attribute 'zero_'
    # b = b - lr * b.grad
    # w = w - lr * w.grad
    # print(b)

    # SECOND ATTEMPT - using in-place Python assignment
    # RuntimeError: a leaf Variable that requires grad
    # has been used in an in-place operation.
    # b -= lr * b.grad
    # w -= lr * w.grad

    # THIRD ATTEMPT - NO_GRAD for the win!
    # We need to use NO_GRAD to keep the update out of
    # the gradient computation. Why is that? It boils
    # down to the DYNAMIC GRAPH that PyTorch uses...
    with torch.no_grad():
        b -= lr * b.grad
        w -= lr * w.grad

    # PyTorch is "clingy" to its computed gradients; we
    # need to tell it to let them go, using zero_() to
    # reset them before the next epoch
    b.grad.zero_()
    w.grad.zero_()
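To watch the three attempts fail (and succeed) outside the training loop, here is a small standalone sketch. It is not part of the original notebook, and it uses a made-up scalar loss just to populate the gradient.

import torch

lr = 0.1
b = torch.randn(1, requires_grad=True)
(2 * b).mean().backward()  # b.grad is now tensor([2.])

# FIRST ATTEMPT: plain assignment REPLACES b with a brand-new
# tensor that is no longer a leaf; its .grad stays None, so a
# later call to b.grad.zero_() raises the AttributeError above
b1 = b - lr * b.grad
print(b1.is_leaf)  # False

# SECOND ATTEMPT: an in-place update on a leaf tensor that
# requires grad raises a RuntimeError
try:
    b -= lr * b.grad
except RuntimeError as e:
    print(e)

# THIRD ATTEMPT: no_grad() keeps the update out of the dynamic
# computation graph, so the in-place update is allowed and b
# remains the same trainable leaf tensor
with torch.no_grad():
    b -= lr * b.grad
print(b.requires_grad, b.is_leaf)  # True True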
