
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub

Output

Linear(in_features=1, out_features=1, bias=True)

Do we still have our b and w parameters? Sure, we do:

linear.state_dict()

Output

OrderedDict([('weight', tensor([[-0.2191]])),
             ('bias', tensor([0.2018]))])

So, our former parameter b is the bias, and our former parameter w is the weight (your values will be different, since I haven't set a random seed for this example).
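To double-check that nn.Linear really does the same job as our b and w parameters, we can compare its output against the computation x @ weight.T + bias done by hand. This is a quick sanity check, not a cell from the book; the seed value is arbitrary:

```python
import torch
import torch.nn as nn

torch.manual_seed(42)  # arbitrary seed, just to make the values reproducible
linear = nn.Linear(in_features=1, out_features=1, bias=True)

# A Linear layer's forward pass computes x @ weight.T + bias
x = torch.tensor([[1.0], [2.0]])
manual = x @ linear.weight.T + linear.bias

assert torch.allclose(linear(x), manual)
print("nn.Linear matches the manual b + w x computation")
```

The weight is stored as a (out_features, in_features) matrix, which is why it has to be transposed in the manual version.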

Now, let's use PyTorch's Linear model as an attribute of our own, thus creating a nested model.

You are not limited to defining parameters, though; models can contain other models as their attributes as well, so you can easily nest them. We'll see an example of this shortly.

Even though this is clearly a contrived example, since we are pretty much wrapping the underlying model without adding anything to it, it illustrates the concept well.

Notebook Cell 1.11 - Building a model using PyTorch’s Linear model

class MyLinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        # Instead of our custom parameters, we use a Linear layer
        # with a single input and a single output
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        # Now it simply delegates the call to the inner Linear layer
        return self.linear(x)
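If we instantiate this nested model and inspect its state_dict(), the parameters of the inner Linear layer show up prefixed with the attribute name. The short check below is a sketch, not a cell from the book, and the seed value is arbitrary:

```python
import torch
import torch.nn as nn

class MyLinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        # The inner model becomes a registered submodule
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

torch.manual_seed(42)  # arbitrary seed for reproducibility
model = MyLinearRegression()

# The nested layer's parameters are namespaced by its attribute name
print(list(model.state_dict().keys()))  # ['linear.weight', 'linear.bias']
```

This automatic registration is why attributes that are modules (or parameters) must be assigned inside __init__ after calling super().__init__(): nn.Module hooks into attribute assignment to track them.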

110 | Chapter 1: A Simple Regression Problem
