Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


Output

tensor([[1., 2., 1.],
        [1., 1., 1.]])
tensor([[1., 3., 1., 1., 1., 1.]])

Output

UserWarning: To copy construct from a tensor, it is
recommended to use sourceTensor.clone().detach() or
sourceTensor.clone().detach().requires_grad_(True),
rather than tensor.new_tensor(sourceTensor).
"""Entry point for launching an IPython kernel.

It seems that PyTorch prefers that we use clone(), together with detach(), instead of new_tensor(). Both ways accomplish exactly the same result, but the code below is deemed cleaner and more readable.

# Let's follow PyTorch's suggestion and use "clone" method
another_matrix = matrix.view(1, 6).clone().detach()
# Again, if we change one of its elements...
another_matrix[0, 1] = 4.
# The original tensor (matrix) is left untouched!
print(matrix)
print(another_matrix)

Output

tensor([[1., 2., 1.],
        [1., 1., 1.]])
tensor([[1., 4., 1., 1., 1., 1.]])

You’re probably asking yourself: "But what about the detach() method? What does it do?"

It removes the tensor from the computation graph, which probably raises more questions than it answers, right? Don’t worry, we’ll get back to it later in this chapter.

PyTorch | 75
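Even before we get to computation graphs, a quick peek at detach()'s visible effect may help. The sketch below (with made-up tensor names, just for illustration) shows that a detached clone no longer tracks gradients, while the original tensor still does:

```python
import torch

# A tensor that participates in the computation graph
tracked = torch.ones(2, 2, requires_grad=True)

# clone() copies the data; detach() removes the copy from the graph
independent = tracked.clone().detach()

print(tracked.requires_grad)      # True
print(independent.requires_grad)  # False
```

That requires_grad attribute is exactly what ties a tensor to the computation graph, and it will come up again later in the chapter.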


Loading Data, Devices, and CUDA

It is time to start converting our Numpy code to PyTorch: We’ll start with the

training data; that is, our x_train and y_train arrays.

"How do we go from Numpy’s arrays to PyTorch’s tensors?"

That’s what as_tensor() is good for (which works like from_numpy()).

This operation preserves the type of the array:

x_train_tensor = torch.as_tensor(x_train)

x_train.dtype, x_train_tensor.dtype

Output

(dtype('float64'), torch.float64)

You can also easily cast it to a different type, like a lower-precision (32-bit) float,

which will occupy less space in memory, using float():

float_tensor = x_train_tensor.float()

float_tensor.dtype

Output

torch.float32
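The memory saving is easy to check: each float64 element takes 8 bytes, while each float32 element takes 4. A minimal sketch (using element_size(), which reports the size in bytes of a single element):

```python
import numpy as np
import torch

# Numpy arrays of floats are float64 by default
x_train = np.random.rand(5, 1)
x_train_tensor = torch.as_tensor(x_train)

# Casting to 32-bit floats halves the size of each element
float_tensor = x_train_tensor.float()

print(x_train_tensor.element_size())  # 8 bytes per element
print(float_tensor.element_size())    # 4 bytes per element
```

One thing to keep in mind: casting to a different type produces a new tensor, so the float32 tensor no longer shares memory with the original Numpy array.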

IMPORTANT: Both as_tensor() and from_numpy() return a

tensor that shares the underlying data with the original Numpy

array. Similar to what happened when we used view() in the last

section, if you modify the original Numpy array, you’re modifying

the corresponding PyTorch tensor too, and vice-versa.

76 | Chapter 1: A Simple Regression Problem
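The data-sharing behavior described above is easy to verify with a throwaway array (dummy names here, just for illustration): modifying the Numpy array in place also changes the tensor built from it.

```python
import numpy as np
import torch

dummy_array = np.ones(3)
dummy_tensor = torch.as_tensor(dummy_array)

# Modifying the Numpy array in place...
dummy_array[1] = 0.

# ...changes the tensor too, since they share the same memory
print(dummy_tensor)  # tensor([1., 0., 1.], dtype=torch.float64)
```

If you need an independent copy instead, the clone().detach() pattern from the previous section is the way to go.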
