Deep Learning with PyTorch Step-by-Step: A Beginner’s Guide, by Daniel Voigt Godoy (Leanpub)




There are MANY different layers that can be used in PyTorch:

• Convolution Layers

• Pooling Layers

• Padding Layers

• Non-linear Activations

• Normalization Layers

• Recurrent Layers

• Transformer Layers

• Linear Layers

• Dropout Layers

• Sparse Layers (embeddings)

• Vision Layers

• DataParallel Layers (multi-GPU)

• Flatten Layer

So far, we have just used a Linear layer. In the chapters ahead, we’ll use many others, like convolution, pooling, padding, flatten, dropout, and non-linear activations.
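None of these layers shows up in our running example yet, but as a taste of what is ahead, here is a minimal, self-contained sketch that instantiates a few of them and chains them over a dummy batch of images. Everything in it (channel counts, image size, layer choices) is made up purely for illustration:

    import torch
    import torch.nn as nn

    # A few of the layer types listed above
    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)  # convolution
    relu = nn.ReLU()                                                # non-linear activation
    pool = nn.MaxPool2d(kernel_size=2)                              # pooling
    flat = nn.Flatten()                                             # flatten
    drop = nn.Dropout(p=0.5)                                        # dropout
    fc = nn.Linear(8 * 13 * 13, 1)                                  # linear

    # A dummy mini-batch of 16 single-channel 28x28 "images"
    dummy = torch.randn(16, 1, 28, 28)

    # Chaining the layers:
    # (16, 1, 28, 28) -> (16, 8, 26, 26) -> (16, 8, 13, 13) -> (16, 1352) -> (16, 1)
    out = fc(drop(flat(pool(relu(conv(dummy))))))
    print(out.shape)  # torch.Size([16, 1])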

Putting It All Together

We’ve covered a lot of ground so far, from coding a linear regression in Numpy using gradient descent to transforming it into a PyTorch model, step-by-step.

It is time to put it all together and organize our code into three fundamental parts, namely:

• data preparation (not data generation!)

• model configuration

• model training

Let’s tackle these three parts, in order.
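Since each part will live in its own folder, with versioned files saved inside it, the project layout will eventually look something like the sketch below. The folder names for the last two parts are assumptions here, following the same pattern as the data preparation folder used in the next section, and the v1 files only appear in Chapter 2:

    data_preparation/
        v0.py
        v1.py
    model_configuration/
        v0.py
        v1.py
    model_training/
        v0.py
        v1.py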

Data Preparation

There hasn’t been much data preparation up to this point, to be honest. After generating our data points in Notebook Cell 1.1, the only preparation step performed so far has been transforming Numpy arrays into PyTorch tensors, as in Notebook Cell 1.3, which is reproduced below:

Define - Data Preparation V0

    %%writefile data_preparation/v0.py

    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # Our data was in Numpy arrays, but we need to transform them
    # into PyTorch's Tensors and then send them to the
    # chosen device
    x_train_tensor = torch.as_tensor(x_train).float().to(device)
    y_train_tensor = torch.as_tensor(y_train).float().to(device)

Run - Data Preparation V0

    %run -i data_preparation/v0.py

This part will get much more interesting in the next chapter, when we get to use Dataset and DataLoader classes :-)

"What’s the purpose of saving cells to these files?"

We know we have to run the full sequence to train a model: data preparation, model configuration, and model training. In Chapter 2, we’ll gradually improve each of these parts, versioning them inside each corresponding folder. So, saving them to files allows us to run a full sequence using different versions without having to duplicate code.

Let’s say we start improving model configuration (and we will do exactly that in Chapter 2), but the other two parts are still the same; how do we run the full sequence?
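As a sketch of the answer (Chapter 2 spells it out), we simply run one file per part, mixing versions as needed. Assume a hypothetical improved model_configuration/v1.py, while the other two parts remain at v0; none of these files exists until we actually write them:

    # Data preparation hasn't changed, so we keep running version 0
    %run -i data_preparation/v0.py

    # Hypothetical improved version of the model configuration part
    %run -i model_configuration/v1.py

    # Model training hasn't changed either
    %run -i model_training/v0.py

Since %run -i executes each file inside the notebook’s own namespace, variables defined by one file (like x_train_tensor) are visible to the files that run after it, so the three parts still work together as a single sequence.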
