torch.manual_seed(13)

# Model Configuration
model_cnn2_nodrop = CNN2(n_feature=5, p=0.0)
multi_loss_fn = nn.CrossEntropyLoss(reduction='mean')
optimizer_cnn2_nodrop = optim.Adam(
    model_cnn2_nodrop.parameters(), lr=3e-4
)

# Model Training
sbs_cnn2_nodrop = StepByStep(
    model_cnn2_nodrop, multi_loss_fn, optimizer_cnn2_nodrop
)
sbs_cnn2_nodrop.set_loaders(train_loader, val_loader)
sbs_cnn2_nodrop.train(10)

Then, we can plot the losses of the model above (no dropout) together with the losses from our previous model (30% dropout):
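A minimal sketch of such a comparison, assuming the 30%-dropout model from the previous section is available as sbs_cnn2, and that each StepByStep instance keeps its per-epoch losses in the losses and val_losses lists built up during training:

import matplotlib.pyplot as plt

# Compare per-epoch losses of both models; the names sbs_cnn2 and
# the losses / val_losses attributes are assumed from earlier sections
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.plot(sbs_cnn2.losses, 'b', label='Training (dropout)')
ax.plot(sbs_cnn2.val_losses, 'r', label='Validation (dropout)')
ax.plot(sbs_cnn2_nodrop.losses, 'b--', label='Training (no dropout)')
ax.plot(sbs_cnn2_nodrop.val_losses, 'r--', label='Validation (no dropout)')
ax.set_xlabel('Epochs')
ax.set_ylabel('Loss')
ax.legend()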

Figure 6.11 - Losses (with and without regularization)

This is actually a very nice depiction of the regularizing effect of using dropout:

• Training loss is higher with dropout; after all, dropout makes training harder.
• Validation loss is lower with dropout; it means that the model is generalizing better and achieving a better performance on unseen data, which is the whole point of using a regularization method like dropout.

We can also observe this effect by looking at the accuracy for both sets and models, as in the sketch below.
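A minimal sketch, assuming the StepByStep class exposes the correct method and the loader_apply static method from earlier chapters, which count correct predictions per class over a data loader:

# Counts of (correct, total) predictions per class for each model;
# summing over classes and dividing gives the overall accuracy
correct_nodrop = StepByStep.loader_apply(
    val_loader, sbs_cnn2_nodrop.correct
)
correct_drop = StepByStep.loader_apply(val_loader, sbs_cnn2.correct)
print('No dropout:', correct_nodrop.sum(axis=0))
print('30% dropout:', correct_drop.sum(axis=0))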

