Deep Learning with PyTorch Step-by-Step: A Beginner's Guide, by Daniel Voigt Godoy
Putting It All Together

Helper Function #4

def index_splitter(n, splits, seed=13):
    idx = torch.arange(n)
    # Makes the split argument a tensor
    splits_tensor = torch.as_tensor(splits)
    # Finds the correct multiplier, so we don't have
    # to worry about summing up to N (or one)
    multiplier = n / splits_tensor.sum()
    splits_tensor = (multiplier * splits_tensor).long()
    # If there is a difference, throws it into the first split
    # so random_split does not complain
    diff = n - splits_tensor.sum()
    splits_tensor[0] += diff
    # Uses PyTorch's random_split to split the indices
    torch.manual_seed(seed)
    return random_split(idx, splits_tensor)

Helper Function #5

def make_balanced_sampler(y):
    # Computes weights for compensating imbalanced classes
    classes, counts = y.unique(return_counts=True)
    weights = 1.0 / counts.float()
    sample_weights = weights[y.squeeze().long()]
    # Builds sampler with the computed weights
    generator = torch.Generator()
    sampler = WeightedRandomSampler(
        weights=sample_weights,
        num_samples=len(sample_weights),
        generator=generator,
        replacement=True
    )
    return sampler
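To see what these helpers produce before they are used for real below, here is a minimal usage sketch on hypothetical toy data (not from the book); it assumes torch has been imported and the two functions above are already defined:

import torch

# Toy example: ten data points, an 80/20 split
train_idx, val_idx = index_splitter(10, [80, 20])
print(len(train_idx), len(val_idx))  # 8 2

# Imbalanced toy labels: eight zeros, two ones
y = torch.tensor([0] * 8 + [1] * 2).view(-1, 1).float()
sampler = make_balanced_sampler(y)
# The sampler draws indices with replacement, oversampling the minority
# class, so the mean label over the drawn indices should be close to 0.5
sampled_idx = list(sampler)
print(y[sampled_idx].mean())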
Data Preparation

# Builds tensors from numpy arrays BEFORE split
# Modifies the scale of pixel values from [0, 255] to [0, 1]
x_tensor = torch.as_tensor(images / 255).float()
y_tensor = torch.as_tensor(labels.reshape(-1, 1)).float()

# Uses index_splitter to generate indices for training and
# validation sets
train_idx, val_idx = index_splitter(len(x_tensor), [80, 20])
# Uses indices to perform the split
x_train_tensor = x_tensor[train_idx]
y_train_tensor = y_tensor[train_idx]
x_val_tensor = x_tensor[val_idx]
y_val_tensor = y_tensor[val_idx]

# Builds different composers because of data augmentation on training set
train_composer = Compose([RandomHorizontalFlip(p=.5),
                          Normalize(mean=(.5,), std=(.5,))])
val_composer = Compose([Normalize(mean=(.5,), std=(.5,))])
# Uses custom dataset to apply composed transforms to each set
train_dataset = TransformedTensorDataset(
    x_train_tensor, y_train_tensor, transform=train_composer
)
val_dataset = TransformedTensorDataset(
    x_val_tensor, y_val_tensor, transform=val_composer
)

# Builds a weighted random sampler to handle imbalanced classes
sampler = make_balanced_sampler(y_train_tensor)

# Uses sampler in the training set to get a balanced data loader
train_loader = DataLoader(
    dataset=train_dataset, batch_size=16, sampler=sampler
)
val_loader = DataLoader(dataset=val_dataset, batch_size=16)
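The Data Preparation code above relies on the custom TransformedTensorDataset built earlier in the chapter. For reference, a minimal sketch of such a dataset, assuming the transform only needs to be applied to the features, could look like this:

from torch.utils.data import Dataset

class TransformedTensorDataset(Dataset):
    def __init__(self, x, y, transform=None):
        self.x = x
        self.y = y
        self.transform = transform

    def __getitem__(self, index):
        # Applies the composed transforms to the features only
        x = self.x[index]
        if self.transform:
            x = self.transform(x)
        return x, self.y[index]

    def __len__(self):
        return len(self.x)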
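As a quick sanity check (a sketch, not part of the original listing), we can draw a single mini-batch from train_loader to confirm the tensor shapes and verify that, thanks to the weighted sampler, the labels are roughly balanced on average:

# Fetches one mini-batch of (already transformed) images and labels
x_batch, y_batch = next(iter(train_loader))
# Shapes follow (batch_size, channels, height, width) and (batch_size, 1)
print(x_batch.shape, y_batch.shape)
# With the weighted sampler, the mean label should hover around 0.5
print(y_batch.mean())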