Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide (Leanpub)
If you want to learn more about both curves, you can check Scikit-Learn's documentation for "Receiver Operating Characteristic (ROC)" [72] and "Precision-Recall" [73]. Another good resource is Jason Brownlee's Machine Learning Mastery blog: "How to Use ROC Curves and Precision-Recall Curves for Classification in Python" [74] and "ROC Curves and Precision-Recall Curves for Imbalanced Classification" [75].

Putting It All Together

In this chapter, we haven't modified the training pipeline much. The data preparation part is roughly the same as in the previous chapter, except for the fact that we performed the split using Scikit-Learn this time. The model configuration part is largely the same as well, but we changed the loss function, so it is the appropriate one for a classification problem. The model training part is quite straightforward given the development of the StepByStep class in the last chapter.

But now, after training a model, we can use our class' predict() method to get predictions for our validation set and use Scikit-Learn's metrics module to compute a wide range of classification metrics, like the confusion matrix, for example.
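As a sketch of that last step, the snippet below turns predicted probabilities (standing in for the output of predict(), with made-up values and labels purely for illustration) into class predictions and feeds them to Scikit-Learn's metrics module:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical predicted probabilities for six validation points
probs = np.array([0.1, 0.6, 0.4, 0.8, 0.3, 0.9])
y_val = np.array([0, 0, 1, 1, 0, 1])

# Threshold at 0.5 to turn probabilities into class predictions
preds = (probs >= 0.5).astype(int)

# Rows are actual classes, columns are predicted classes
cm = confusion_matrix(y_val, preds)
print(cm)  # [[2 1]
           #  [1 2]]
print(precision_score(y_val, preds))  # 2/3: two TPs, one FP
print(recall_score(y_val, preds))     # 2/3: two TPs, one FN
```

Any metric built from the confusion matrix (precision, recall, TPR, FPR, and so on) follows from the same thresholded predictions, which is why moving the threshold trades one metric off against another.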
Data Preparation
torch.manual_seed(13)

# Builds tensors from Numpy arrays
x_train_tensor = torch.as_tensor(X_train).float()
y_train_tensor = torch.as_tensor(y_train.reshape(-1, 1)).float()

x_val_tensor = torch.as_tensor(X_val).float()
y_val_tensor = torch.as_tensor(y_val.reshape(-1, 1)).float()

# Builds dataset containing ALL data points
train_dataset = TensorDataset(x_train_tensor, y_train_tensor)
val_dataset = TensorDataset(x_val_tensor, y_val_tensor)

# Builds a loader for each set
train_loader = DataLoader(
    dataset=train_dataset,
    batch_size=16,
    shuffle=True
)
val_loader = DataLoader(dataset=val_dataset, batch_size=16)
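One detail worth noting in the data preparation above: torch.as_tensor() avoids copying the underlying Numpy array when the dtypes match, while the subsequent .float() call casts to float32 and therefore does produce a copy. A small sketch, using a made-up array in place of X_train:

```python
import numpy as np
import torch

X = np.array([[1.0, 2.0], [3.0, 4.0]])  # float64, as Numpy defaults to

shared = torch.as_tensor(X)  # same dtype -> shares memory with X, no copy
X[0, 0] = 99.0               # the tensor sees this change

as_float = torch.as_tensor(X).float()  # cast to float32 forces a copy
X[0, 0] = -1.0               # shared sees it; as_float keeps 99.0
```

This is why modifying the original array after building tensors with as_tensor() can silently change your data, unless a cast (like .float()) broke the link.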
Model Configuration
# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

torch.manual_seed(42)
model = nn.Sequential()
model.add_module('linear', nn.Linear(2, 1))

# Defines an SGD optimizer to update the parameters
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines a BCE loss function (it expects logits, not probabilities)
loss_fn = nn.BCEWithLogitsLoss()
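For completeness, here is what the model training part boils down to once the pieces above are in place: a minimal manual loop, equivalent to what the StepByStep class does under the hood, run on made-up data standing in for X_train / y_train (two features, linearly separable labels):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

# Toy stand-in for the real dataset: label is 1 when the features sum to > 0
torch.manual_seed(13)
X = torch.randn(80, 2)
y = (X[:, 0] + X[:, 1] > 0).float().view(-1, 1)

loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

torch.manual_seed(42)
model = nn.Sequential(nn.Linear(2, 1))
optimizer = optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

# The inner steps of each epoch: forward pass, loss, backward pass, update
for epoch in range(50):
    for x_batch, y_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x_batch), y_batch)  # model outputs logits
        loss.backward()
        optimizer.step()

# Logits -> probabilities via sigmoid, then threshold at 0.5
with torch.no_grad():
    probs = torch.sigmoid(model(X))
acc = ((probs >= 0.5).float() == y).float().mean()
```

Since the loss function is BCEWithLogitsLoss, the sigmoid is applied only at prediction time; the model itself outputs raw logits throughout training.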
260 | Chapter 3: A Simple Classification Problem