Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub
We can convert the former into the latter using a one-liner:

example_tensor = torch.as_tensor(example / 255).float()

Moreover, we can use this line of code to convert our whole Numpy dataset into tensors so they become an appropriate input to our composed transformation.

Data Preparation

The first step of data preparation, as in previous chapters, is to convert our features and labels from Numpy arrays to PyTorch tensors:

# Builds tensors from numpy arrays BEFORE split
x_tensor = torch.as_tensor(images / 255).float()
y_tensor = torch.as_tensor(labels.reshape(-1, 1)).float()

The only difference is that we scaled the images to get them into the expected [0.0, 1.0] range.

Dataset Transforms

Next, we use both tensors to build a Dataset, but not a simple TensorDataset like before. Once again, we’ll build our own custom dataset that is capable of handling transformations. Its code is actually quite simple:
Transformed Dataset

class TransformedTensorDataset(Dataset):
    def __init__(self, x, y, transform=None):
        self.x = x
        self.y = y
        self.transform = transform

    def __getitem__(self, index):
        x = self.x[index]

        if self.transform:
            x = self.transform(x)

        return x, self.y[index]

    def __len__(self):
        return len(self.x)

It takes three arguments: a tensor for features (x), another tensor for labels (y), and an optional transformation. These arguments are then stored as attributes of the class. Of course, if no transformation is given, it will behave similarly to a regular TensorDataset.

The main difference is in the __getitem__() method: Instead of simply returning the elements corresponding to a given index in both tensors, it transforms the features, if a transformation is defined.

"Do I have to create a custom dataset to perform transformations?"

Not necessarily, no. The ImageFolder dataset, which you’ll likely use for handling real images, handles transformations out of the box. The mechanism is essentially the same: If a transformation is defined, the dataset applies it to the images. The purpose of using yet another custom dataset here is to illustrate this mechanism.

So, let’s redefine our composed transformations (so it actually flips the image randomly instead of every time) and create our dataset:
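That redefinition falls outside this excerpt, so here is a hedged, self-contained sketch of the whole pipeline on toy data instead. The array shapes are assumptions for illustration, and a deterministic torch.flip stands in for the torchvision Compose([RandomHorizontalFlip(p=0.5), ...]) composer used in the chapter, so the sketch stays dependency-free and its output is checkable — this is not the author's exact code:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class TransformedTensorDataset(Dataset):
    """Same dataset as above: applies `transform` to features on access."""
    def __init__(self, x, y, transform=None):
        self.x = x
        self.y = y
        self.transform = transform

    def __getitem__(self, index):
        x = self.x[index]
        if self.transform:
            x = self.transform(x)
        return x, self.y[index]

    def __len__(self):
        return len(self.x)

# Toy stand-ins for the chapter's images/labels (shapes are assumptions):
# 16 single-channel 5x5 "images" with integer pixel values in [0, 255]
images = np.random.randint(0, 256, size=(16, 1, 5, 5))
labels = np.random.randint(0, 2, size=(16,))

# Scale pixels to [0.0, 1.0] and build tensors, as in Data Preparation
x_tensor = torch.as_tensor(images / 255).float()
y_tensor = torch.as_tensor(labels.reshape(-1, 1)).float()

# Deterministic left-right flip, standing in for a random composer
flip = lambda img: torch.flip(img, dims=[-1])

dataset = TransformedTensorDataset(x_tensor, y_tensor, transform=flip)
x0, y0 = dataset[0]  # indexing triggers the transformation
```

Note that x0 comes back flipped while x_tensor itself is untouched: the transformation runs lazily inside __getitem__(), on access, rather than once over the whole tensor.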