Imbalanced datasets can be handled through the pos_weight argument of nn.BCEWithLogitsLoss(). To compensate for the imbalance, one can set the weight to equal the ratio of negative to positive examples:

pos_weight = (# of points in the negative class) / (# of points in the positive class)

In our imbalanced dummy example, the result would be 3.0. This way, every point in the positive class would have its corresponding loss multiplied by three. Since there is a single label for each data point (c = 1), the tensor used as an argument for pos_weight has only one element: tensor([3.0]). We could compute it like this:
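The dummy tensors used below are created earlier in the chapter. As a minimal stand-in, assuming a 3:1 negative-to-positive split (the names match the book's, but these particular values are placeholders), they could look like this:

import torch
import torch.nn as nn

# hypothetical stand-ins: one positive point and three negative ones,
# each with a single logit, shaped (N, 1) as the loss function expects
dummy_imb_logits = torch.tensor([1.1, 0.3, -0.4, -1.5]).view(-1, 1)
dummy_imb_labels = torch.tensor([1.0, 0.0, 0.0, 0.0]).view(-1, 1)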

# counts the points in each class
n_neg = (dummy_imb_labels == 0).sum().float()
n_pos = (dummy_imb_labels == 1).sum().float()
# ratio of negative to positive points, as a one-element tensor
pos_weight = (n_neg / n_pos).view(1,)
pos_weight

Output

tensor([3.])

Now, let’s create yet another loss function, including the pos_weight argument this time:

loss_fn_imb = nn.BCEWithLogitsLoss(
    reduction='mean',
    pos_weight=pos_weight
)
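Under the hood, pos_weight multiplies only the positive-class term of the binary cross-entropy, so each point’s loss becomes -[pos_weight · y · log σ(z) + (1 − y) · log(1 − σ(z))]. A quick sketch to verify this, reusing the hypothetical tensors above and throwaway losses with reduction='none' so the individual losses are kept instead of averaged:

# per-point losses, without and with pos_weight
per_point = nn.BCEWithLogitsLoss(reduction='none')(
    dummy_imb_logits, dummy_imb_labels
)
per_point_w = nn.BCEWithLogitsLoss(
    reduction='none', pos_weight=pos_weight
)(dummy_imb_logits, dummy_imb_labels)

# the ratio should be 3.0 for the positive point, 1.0 for the negatives
print(per_point_w / per_point)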

Then, we can use this weighted loss function to compute the loss for our imbalanced dataset. I guess one would expect the same loss as before; after all, this is a weighted loss. Right?

loss = loss_fn_imb(dummy_imb_logits, dummy_imb_labels)
loss
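One way to check, as a sketch building on the hypothetical per-point losses above: with reduction='mean', PyTorch divides the weighted sum of per-point losses by the number of points, not by the sum of the weights.

# mean reduction divides by the point count (4), not the weight sum
manual_mean = per_point_w.sum() / dummy_imb_labels.numel()
print(manual_mean)  # should match the loss computed above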
