Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner’s Guide (Leanpub)


back_to_numpy = x_train_tensor.numpy()

Output

TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Unfortunately, NumPy cannot handle GPU tensors! You need to make them CPU tensors first using cpu():

back_to_numpy = x_train_tensor.cpu().numpy()

So, to avoid this error, call cpu() first and then numpy(), even if you are using a CPU. It follows the same principle as to(device): you can share your code with others who may be using a GPU.
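As a quick sketch of this device-agnostic pattern (the tensor values here are just illustrative):

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
x_train_tensor = torch.as_tensor([[1.0], [2.0]]).to(device)

# .cpu() copies a GPU tensor to host memory; on a CPU tensor it is a no-op,
# so this chain works regardless of which device the code runs on.
back_to_numpy = x_train_tensor.cpu().numpy()
print(type(back_to_numpy))  # <class 'numpy.ndarray'>
```

Since cpu() simply returns the same tensor when it already lives in host memory, there is no penalty for always including it.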

Creating Parameters

What distinguishes a tensor used for training data (or validation, or test)—like the ones we’ve just created—from a tensor used as a (trainable) parameter / weight? The latter requires the computation of its gradients, so we can update their values (the parameters’ values, that is). That’s what the requires_grad=True argument is good for: it tells PyTorch to compute gradients for us.

A tensor for a learnable parameter requires a gradient!
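As a minimal sketch (the names b and w are illustrative parameters for a simple linear model, not tied to any specific code above):

```python
import torch

torch.manual_seed(42)  # for reproducibility

# Illustrative parameters for a linear model, y = b + w * x.
# requires_grad=True tells PyTorch to track operations on these tensors
# so it can compute their gradients later during backpropagation.
b = torch.randn(1, requires_grad=True, dtype=torch.float)
w = torch.randn(1, requires_grad=True, dtype=torch.float)

print(b.requires_grad, w.requires_grad)  # True True
```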

You may be tempted to create a simple tensor for a parameter and, later on, send it to your chosen device, as we did with our data, right? Not so fast…
