
Figure 1.6 - Now parameter "b" does NOT have its gradient computed, but it is STILL used in the computation

Unsurprisingly, the blue box corresponding to parameter b is no more!

Simple enough: No gradients, no graph!
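
To see this for yourself, here is a minimal sketch (my own, not the book's listing), assuming x_train_tensor, device, and torchviz's make_dot are already set up as earlier in the chapter: if b is created without requires_grad, it still participates in the computation, but no gradient node is created for it, so it vanishes from the graph.

# "b" is a plain tensor now: it is still used in the computation,
# but no gradient will ever be computed for it
b = torch.randn(1, dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)

yhat = b + w * x_train_tensor

make_dot(yhat)  # the graph only tracks the path through "w"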

The best thing about the dynamic computation graph is that you can make it as complex as you want. You can even use control flow statements (e.g., if statements) to control the flow of the gradients.

Figure 1.7 shows an example of this. And yes, I do know that the computation itself is complete nonsense!

b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)

yhat = b + w * x_train_tensor
error = yhat - y_train_tensor
loss = (error ** 2).mean()

# this makes no sense!!
if loss > 0:
    yhat2 = w * x_train_tensor
    error2 = yhat2 - y_train_tensor

# neither does this!!
loss += error2.mean()

make_dot(loss)
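
As a quick sanity check (my own addition, not part of the book's listing), you can call backward() on this loss: autograd traverses whatever graph was actually built at runtime, branch included, and accumulates gradients into b.grad and w.grad.

# backpropagating through the nonsensical graph above still works:
# autograd simply follows whatever operations were recorded at runtime
loss.backward()
print(b.grad, w.grad)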
