Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)



Notebook Cell 1.10 - PyTorch's model in action: no more manual prediction / forward step!

# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
# Now we can create a model and send it at once to the device
model = ManualLinearRegression().to(device)

# Defines an SGD optimizer to update the parameters
# (now retrieved directly from the model)
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines an MSE loss function
loss_fn = nn.MSELoss(reduction='mean')

# Defines number of epochs
n_epochs = 1000

for epoch in range(n_epochs):
    model.train() # What is this?!?

    # Step 1 - Computes model's predicted output - forward pass
    # No more manual prediction!
    yhat = model(x_train_tensor)

    # Step 2 - Computes the loss
    loss = loss_fn(yhat, y_train_tensor)

    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()

    # Step 4 - Updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()

# We can also inspect its parameters using its state_dict
print(model.state_dict())
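The cell above relies on x_train_tensor, y_train_tensor, and the ManualLinearRegression class defined earlier in the chapter, which are not shown in this excerpt. Here is a self-contained sketch of the same loop; the synthetic data and the model class below are stand-ins reconstructed for illustration (true parameters b = 1, w = 2), not the book's exact code:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in synthetic data: y = 1 + 2x + noise
torch.manual_seed(42)
x_train_tensor = torch.rand(100, 1)
y_train_tensor = 1 + 2 * x_train_tensor + 0.1 * torch.randn(100, 1)

class ManualLinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        # "b" and "w" are registered as trainable parameters
        self.b = nn.Parameter(torch.randn(1))
        self.w = nn.Parameter(torch.randn(1))

    def forward(self, x):
        return self.b + self.w * x

model = ManualLinearRegression()
optimizer = optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss(reduction='mean')

for epoch in range(1000):
    model.train()
    yhat = model(x_train_tensor)          # forward pass
    loss = loss_fn(yhat, y_train_tensor)  # compute loss
    loss.backward()                       # compute gradients
    optimizer.step()                      # update parameters
    optimizer.zero_grad()                 # reset gradients

print(model.state_dict())  # b and w should land close to 1 and 2
```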

"What do we need this for?"

It turns out, state dictionaries can also be used for checkpointing a model, as we will see in Chapter 2.
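As a minimal preview (Chapter 2 covers the actual checkpointing recipe), saving and restoring a state dict might look like the sketch below. It uses nn.Linear as a stand-in model and an in-memory buffer instead of a file, both our own choices for illustration:

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(1, 1)  # stand-in for a trained model

# Saving: a state dict is just an ordered dict of parameter tensors
buffer = io.BytesIO()  # in real checkpointing, this would be a file path
torch.save(model.state_dict(), buffer)

# Loading: create a fresh instance, then restore its parameters
buffer.seek(0)
restored = nn.Linear(1, 1)
restored.load_state_dict(torch.load(buffer))
```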

Device

IMPORTANT: We need to send our model to the same device where the data is. If our data is made of GPU tensors, our model must "live" inside the GPU as well.

If we were to send our dummy model to a device, it would look like this:

torch.manual_seed(42)

# Creates a "dummy" instance of our ManualLinearRegression model

# and sends it to the device

dummy = ManualLinearRegression().to(device)
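A quick way to sanity-check this rule is to compare devices directly. The sketch below uses nn.Linear as a stand-in for ManualLinearRegression; if model and data lived on different devices, the forward pass would raise a runtime error:

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = nn.Linear(1, 1).to(device)  # stand-in model, sent to the device
x = torch.randn(10, 1).to(device)   # data on the same device

# Parameters and inputs share a device, so the forward pass works
assert next(model.parameters()).device == x.device
yhat = model(x)
```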

Forward Pass

The forward pass is the moment when the model makes predictions.

Remember: You should make predictions by calling model(x). DO NOT call model.forward(x)! Otherwise, your model's hooks will not work (if you have them).
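To see why this matters: forward hooks are dispatched by Module's __call__ machinery, so calling forward() directly skips them. The toy hook and counter below are our own illustration, not the book's:

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)
calls = []
# Registers a forward hook that records every (proper) forward pass
model.register_forward_hook(lambda module, inp, out: calls.append(1))

x = torch.randn(3, 1)
model(x)          # goes through __call__, so the hook fires
model.forward(x)  # bypasses __call__, so the hook is skipped

print(len(calls))  # prints 1 - only the model(x) call ran the hook
```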

We can use all these handy methods to update our code, as shown in Notebook Cell 1.10 above.

