Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub
Transformed Dataset
1 class TransformedTensorDataset(Dataset):
2     def __init__(self, x, y, transform=None):
3         self.x = x
4         self.y = y
5         self.transform = transform
6
7     def __getitem__(self, index):
8         x = self.x[index]
9
10         if self.transform:
11             x = self.transform(x)
12
13         return x, self.y[index]
14
15     def __len__(self):
16         return len(self.x)
A Multiclass Classification Problem | 373
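Since the class above simply calls whatever `transform` it is given, we can sanity-check it with a couple of dummy tensors and a plain callable standing in for torchvision's `Normalize` — the shapes and the lambda below are illustrative only, not part of the book's pipeline:

```python
import torch
from torch.utils.data import Dataset

class TransformedTensorDataset(Dataset):
    def __init__(self, x, y, transform=None):
        self.x = x
        self.y = y
        self.transform = transform

    def __getitem__(self, index):
        x = self.x[index]
        if self.transform:
            x = self.transform(x)
        return x, self.y[index]

    def __len__(self):
        return len(self.x)

# Five single-channel 5x5 "images" with pixel values in [0, 1]
x = torch.rand(5, 1, 5, 5)
y = torch.randint(0, 2, (5,))

# Any callable works where a torchvision transform would: this one
# mimics Normalize(mean=(.5,), std=(.5,)), mapping [0, 1] to [-1, 1]
scale = lambda t: (t - .5) / .5

dataset = TransformedTensorDataset(x, y, transform=scale)
img, label = dataset[0]
print(len(dataset), img.shape)
```

The transform runs lazily, inside `__getitem__`, so each sample is normalized only when it is fetched — which is also why data augmentation (when we use it) produces a different result every epoch.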
Data Preparation
1 # Builds tensors from numpy arrays BEFORE split
2 # Modifies the scale of pixel values from [0, 255] to [0, 1]
3 x_tensor = torch.as_tensor(images / 255).float()
4 y_tensor = torch.as_tensor(labels).long()
5
6 # Uses index_splitter to generate indices for training and
7 # validation sets
8 train_idx, val_idx = index_splitter(len(x_tensor), [80, 20])
9 # Uses indices to perform the split
10 x_train_tensor = x_tensor[train_idx]
11 y_train_tensor = y_tensor[train_idx]
12 x_val_tensor = x_tensor[val_idx]
13 y_val_tensor = y_tensor[val_idx]
14
15 # We're not doing any data augmentation now
16 train_composer = Compose([Normalize(mean=(.5,), std=(.5,))])
17 val_composer = Compose([Normalize(mean=(.5,), std=(.5,))])
18
19 # Uses custom dataset to apply composed transforms to each set
20 train_dataset = TransformedTensorDataset(
21 x_train_tensor, y_train_tensor,
22 transform=train_composer
23 )
24 val_dataset = TransformedTensorDataset(
25 x_val_tensor, y_val_tensor,
26 transform=val_composer
27 )
28
29 # Builds a weighted random sampler to handle imbalanced classes
30 sampler = make_balanced_sampler(y_train_tensor)
31
32 # Uses sampler in the training set to get a balanced data loader
33 train_loader = DataLoader(
34 dataset=train_dataset, batch_size=16,
35 sampler=sampler
36 )
37 val_loader = DataLoader(dataset=val_dataset, batch_size=16)
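The two helpers used above, `index_splitter` and `make_balanced_sampler`, were defined in the previous chapter. As a reminder of the behavior they need to provide — shuffled index splits built from a list of relative sizes, and a `WeightedRandomSampler` whose per-sample weights are the inverse frequencies of the classes — here is a simplified sketch (a stand-in, not the book's exact implementation; it assumes labels are contiguous integers starting at zero):

```python
import torch
from torch.utils.data import WeightedRandomSampler

def index_splitter(n, splits, seed=13):
    # Converts relative sizes (e.g. [80, 20]) into fractions
    splits = torch.as_tensor(splits, dtype=torch.float)
    splits = splits / splits.sum()
    # Shuffles all indices reproducibly, then slices one chunk per split
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(n, generator=g)
    sizes = (splits * n).long()
    # Gives any rounding leftover to the first split
    sizes[0] += n - sizes.sum()
    return list(torch.split(idx, sizes.tolist()))

def make_balanced_sampler(y):
    # Counts samples per class; each sample is weighted by the
    # inverse frequency of its class, so minority classes are
    # drawn more often
    _, counts = y.unique(return_counts=True)
    weights = 1.0 / counts.float()
    sample_weights = weights[y]
    return WeightedRandomSampler(
        weights=sample_weights,
        num_samples=len(sample_weights),
        replacement=True,
        generator=torch.Generator().manual_seed(13),
    )
```

Passing this sampler to the training `DataLoader` (and remembering that `sampler` and `shuffle=True` are mutually exclusive) makes every mini-batch roughly balanced across classes, even though the underlying dataset is not.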
Before defining a model to classify our images, we need to discuss something else:
the loss function.
374 | Chapter 5: Convolutions