Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)


dummy_array = np.array([1, 2, 3])
dummy_tensor = torch.as_tensor(dummy_array)
# Modifies the numpy array
dummy_array[1] = 0
# Tensor gets modified too...
dummy_tensor

Output

tensor([1, 0, 3])

"What do I need as_tensor() for? Why can't I just use torch.tensor()?"

Well, you could … just keep in mind that torch.tensor() always makes a copy of the data, instead of sharing the underlying data with the Numpy array.

You can also perform the opposite operation, namely, transforming a PyTorch tensor back to a Numpy array. That's what numpy() is good for:

dummy_tensor.numpy()

Output

array([1, 0, 3])

So far, we have only created CPU tensors. What does that mean? It means the data in the tensor is stored in the computer's main memory, and any operations performed on it are handled by its CPU (the central processing unit; for instance, an Intel® Core i7 processor). So, although the data is, technically speaking, in the memory, we're still calling this kind of tensor a CPU tensor.

"Is there any other kind of tensor?"

Yes, there is also a GPU tensor. A GPU (which stands for graphics processing unit) is the processor of a graphics card. These tensors store their data in the graphics card's memory, and operations on them are performed by the GPU.
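The copy-versus-share distinction is easy to verify by modifying the source array and comparing both tensors afterwards. A minimal sketch (the variable names source_array, shared_tensor, and copied_tensor are mine, chosen for illustration):

```python
import numpy as np
import torch

source_array = np.array([1, 2, 3])

shared_tensor = torch.as_tensor(source_array)  # shares memory with the array
copied_tensor = torch.tensor(source_array)     # makes an independent copy

source_array[1] = 0

print(shared_tensor)  # values are now [1, 0, 3]: it sees the change
print(copied_tensor)  # values are still [1, 2, 3]: the copy is untouched
```

The same sharing works in the other direction: calling numpy() on a CPU tensor returns an array backed by the tensor's memory, so changes to one show up in the other.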

For more information on the differences between CPUs and GPUs, please refer to this link [44].

If you have a graphics card from NVIDIA, you can use the power of its GPU to speed up model training. PyTorch supports the use of these GPUs for model training through CUDA (Compute Unified Device Architecture), which needs to be previously installed and configured (please refer to the "Setup Guide" for more information on this).

If you do have a GPU (and you managed to install CUDA), we're getting to the part where you get to use it with PyTorch. But even if you do not have a GPU, you should stick around in this section anyway. Why? First, you can use a free GPU from Google Colab, and, second, you should always make your code GPU-ready; that is, it should automatically run on a GPU, if one is available.

"How do I know if a GPU is available?"

PyTorch has your back once more: you can use cuda.is_available() to find out if you have a GPU at your disposal and set your device accordingly. So, it is good practice to figure this out at the top of your code:

Defining Your Device

device = 'cuda' if torch.cuda.is_available() else 'cpu'

So, if you don't have a GPU, your device is called cpu. If you do have a GPU, your device is called cuda or cuda:0. Why isn't it called gpu, then? Don't ask me… The important thing is, your code will always be able to use the appropriate device.

"Why cuda:0? Are there others, like cuda:1, cuda:2, and so on?"

There may be, if you are lucky enough to have multiple GPUs in your computer. Since this is usually not the case, I am assuming you have either one GPU or none. So, when we tell PyTorch to send a tensor to cuda without any numbering, it will send it to the current CUDA device, which is device #0 by default.

If you are using someone else's computer and you don't know how many GPUs it has, or which model they are, you can figure it out using cuda.device_count() and cuda.get_device_name():
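Putting these pieces together, a device-agnostic setup might look like the sketch below. The tensor x and the variable n_gpus are mine, added for illustration; on a CPU-only machine the loop simply does nothing and .to(device) is a no-op:

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)

# Inspect the available GPUs (zero on a CPU-only machine)
n_gpus = torch.cuda.device_count()
print(n_gpus)
for i in range(n_gpus):
    print(torch.cuda.get_device_name(i))

# GPU-ready code: send tensors to whatever device was detected
x = torch.tensor([1.0, 2.0]).to(device)
print(x.device)  # cpu, or cuda:0 if a GPU was found
```

Writing .to(device) everywhere, instead of hard-coding .cuda() or leaving tensors on the CPU, is what makes the same script run unchanged on both kinds of machines.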

