Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step: A Beginner's Guide
Figure 5.4 - Striding the image, one step at a time
The size of the movement, in pixels, is called a stride. In our example, the stride
is one.
In code, it means we're changing the slice of the input image:
new_region = single[:, :, 0:3, (0+1):(3+1)]
But the operation remains the same: first, an element-wise multiplication, and
then adding up the elements of the resulting matrix.
Figure 5.5 - Element-wise multiplication
new_filtered_region = new_region * identity
new_total = new_filtered_region.sum()
new_total
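Repeating this slice-multiply-sum at every valid position is all the convolution does. A minimal sketch of the full sweep, assuming a 6x6 input and the identity filter (the exact pixel values below are illustrative stand-ins for the chapter's running example):

```python
import torch

# Hypothetical 6x6 single-channel input in NCHW format and the 3x3
# "identity" filter -- treat the pixel values as assumptions
single = torch.tensor([[[[5., 0., 8., 7., 8., 1.],
                         [1., 9., 5., 0., 7., 7.],
                         [6., 0., 2., 4., 6., 6.],
                         [9., 7., 6., 6., 8., 4.],
                         [8., 3., 8., 5., 1., 3.],
                         [7., 2., 7., 0., 1., 0.]]]])
identity = torch.tensor([[[[0., 0., 0.],
                           [0., 1., 0.],
                           [0., 0., 0.]]]])

stride = 1
kernel = identity.shape[-1]                            # 3
out_size = (single.shape[-1] - kernel) // stride + 1   # (6 - 3) // 1 + 1 = 4

convolved = torch.zeros(1, 1, out_size, out_size)
for i in range(out_size):
    for j in range(out_size):
        # Slice a 3x3 region, multiply element-wise, add everything up
        region = single[:, :,
                        i * stride:i * stride + kernel,
                        j * stride:j * stride + kernel]
        convolved[0, 0, i, j] = (region * identity).sum()

# Since the identity filter only keeps the center pixel, the result is
# the inner 4x4 of the input; position [0, 1] is 5, like new_total above
```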
Output
5
Great! We have a second pixel value to add to our resulting image.
Figure 5.6 - Taking one step to the right
We can keep moving the gray region to the right until we can’t move it anymore.
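How far can we go? With no padding, a filter of size k sliding along an n-pixel axis fits in (n - k) // stride + 1 positions. A quick sanity check in plain Python:

```python
def output_size(n, k, stride=1):
    """Number of valid filter positions along one axis (no padding)."""
    return (n - k) // stride + 1

# A 3x3 filter on a 6-pixel-wide image with a stride of one fits four
# times per axis, so start columns 0 through 3 are valid and the fourth
# step to the right (start column 4) falls outside the image
output_size(6, 3)      # 4
output_size(6, 3, 2)   # 2 -- a stride of two halves the number of steps
```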
Figure 5.7 - An invalid step!
The fourth step to the right will actually place the region partially outside the
input image. That’s a big no-no!
last_horizontal_region = single[:, :, 0:3, (0+4):(3+4)]
The selected region does not match the shape of the filter anymore. So, if we try to
perform the element-wise multiplication, it fails:
last_horizontal_region * identity
350 | Chapter 5: Convolutions
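It is worth spelling out what actually fails: slicing past the edge does not raise an error by itself, it silently truncates, so the region comes out 3x2, and it is the element-wise multiplication that breaks when PyTorch cannot broadcast 3x2 against 3x3. A sketch with stand-in tensors (the real single and identity would behave the same way):

```python
import torch

single = torch.randn(1, 1, 6, 6)    # stand-in for the 6x6 input image
identity = torch.randn(1, 1, 3, 3)  # stand-in for the 3x3 filter

# Slicing past the right edge silently truncates to the last two columns
last_horizontal_region = single[:, :, 0:3, (0 + 4):(3 + 4)]
print(last_horizontal_region.shape)  # torch.Size([1, 1, 3, 2])

# The multiplication is what fails: 3x2 cannot broadcast to 3x3
try:
    last_horizontal_region * identity
except RuntimeError as err:
    print(f"Multiplication failed: {err}")
```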