
To make it clear: In this chapter, we're dealing with a single-label binary classification (we have only one label per data point), and the label is binary (there are only two possible values for it, zero or one). If the label is zero, we say it belongs to the negative class. If the label is one, it belongs to the positive class.

Please do not confuse the positive and negative classes of our single label with c, the so-called class number in the documentation. That c corresponds to the number of different labels associated with a data point. In our example, c = 1.
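
To make this concrete, here is a minimal sketch (the variable name is illustrative): a mini-batch of four data points in our setting has four labels, each either zero or one, but only one label per data point, so c = 1.

import torch

# Four data points, one binary label each, hence c = 1:
# ones belong to the positive class, zeros to the negative class
dummy_labels = torch.tensor([[1.0], [0.0], [0.0], [1.0]])
dummy_labels.shape  # torch.Size([4, 1]), a single label column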

You can use the pos_weight argument to handle imbalanced datasets, but there's more to it than meets the eye. We'll get back to it in the next sub-section.
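
As a brief preview, here is a minimal sketch of what that could look like, assuming a dataset with roughly three negative examples for every positive one (the weight value is illustrative):

import torch
import torch.nn as nn

# pos_weight scales the loss of positive examples: a value of 3.0 makes
# each positive example count as much as three negative ones; the tensor
# has a single element because c = 1
loss_fn_imbalanced = nn.BCEWithLogitsLoss(
    reduction='mean',
    pos_weight=torch.tensor([3.0])
)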

Enough talking (or writing!): Let's see how to use this loss in code. We start by creating the loss function itself:

import torch.nn as nn

# Sigmoid + binary cross-entropy in one numerically stable operation
loss_fn_logits = nn.BCEWithLogitsLoss(reduction='mean')
loss_fn_logits

Output

BCEWithLogitsLoss()

Next, we use logits and labels to compute the loss, following the same principle as before: logits first, then labels. To keep the example consistent, let's get the values of the logits corresponding to the probabilities we used before, 0.9 and 0.2, using our log_odds_ratio() function:
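
Here is a minimal sketch of that computation, assuming log_odds_ratio() is the log odds function defined earlier in the book, log(p / (1 - p)):

import torch
import numpy as np

# Assumption: log_odds_ratio() as defined earlier in the book
def log_odds_ratio(prob):
    return np.log(prob / (1 - prob))

logit1 = log_odds_ratio(.9)  # ~= 2.1972
logit2 = log_odds_ratio(.2)  # ~= -1.3863

dummy_labels = torch.tensor([1.0, 0.0])
dummy_logits = torch.tensor([logit1, logit2]).float()

# Logits first, then labels
loss = loss_fn_logits(dummy_logits, dummy_labels)
loss  # tensor(0.1643), since -(log(.9) + log(.8)) / 2 ~= 0.1643

Since these logits are just the log odds of 0.9 and 0.2, the resulting loss matches what nn.BCELoss would return for those probabilities directly.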

