
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide


Notebook Cell 1.7 - PyTorch’s optimizer in action—no more manual update of parameters!

# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)

# Defines an SGD optimizer to update the parameters
optimizer = optim.SGD([b, w], lr=lr)

# Defines number of epochs
n_epochs = 1000

for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train_tensor

    # Step 2 - Computes the loss
    # We are using ALL data points, so this is BATCH gradient
    # descent. How wrong is our model? That's the error!
    error = (yhat - y_train_tensor)
    # It is a regression, so it computes mean squared error (MSE)
    loss = (error ** 2).mean()

    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()

    # Step 4 - Updates parameters using gradients and
    # the learning rate. No more manual update!
    # with torch.no_grad():
    #     b -= lr * b.grad
    #     w -= lr * w.grad
    optimizer.step()

    # No more telling PyTorch to let gradients go!
    # b.grad.zero_()
    # w.grad.zero_()
    optimizer.zero_grad()
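The cell above relies on x_train_tensor, y_train_tensor, and device being defined earlier in the chapter. As a minimal, self-contained sketch, the snippet below fills in those pieces with stand-in synthetic data (assuming a linear relation close to the chapter's example, roughly y = 1 + 2x + noise) and runs the same optimizer-driven loop end to end:

import torch
import torch.optim as optim

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Hypothetical stand-in for the chapter's training data: y = 1 + 2x + noise
torch.manual_seed(13)
x_train_tensor = torch.rand(100, 1, device=device)
y_train_tensor = 1 + 2 * x_train_tensor \
                 + 0.1 * torch.randn(100, 1, device=device)

lr = 0.1
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)

optimizer = optim.SGD([b, w], lr=lr)

for epoch in range(1000):
    yhat = b + w * x_train_tensor                 # forward pass
    loss = ((yhat - y_train_tensor) ** 2).mean()  # MSE loss
    loss.backward()                               # compute gradients
    optimizer.step()                              # update b and w
    optimizer.zero_grad()                         # reset gradients

print(b, w)  # should land close to the stand-in values b = 1, w = 2

Note that step() and zero_grad() come as a pair: step() applies the accumulated gradients to every parameter handed to the optimizer, and zero_grad() clears them so the next backward pass starts from scratch.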

