
Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub


Imports

For the sake of organization, all libraries needed throughout the code used in any

given chapter are imported at its very beginning. For this chapter, we’ll need the

following imports:

import numpy as np
from PIL import Image

import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, random_split, \
    TensorDataset
from torchvision.transforms import Compose, ToTensor, Normalize, \
    Resize, ToPILImage, CenterCrop, RandomResizedCrop
from torchvision.datasets import ImageFolder
from torchvision.models import alexnet, resnet18, inception_v3
from torchvision.models.alexnet import model_urls
from torchvision.models.utils import load_state_dict_from_url

from stepbystep.v3 import StepByStep

Transfer Learning

In the previous chapter, I called a model fancier just because it had not one but two convolutional blocks, as well as dropout layers. Truth be told, that is far from fancy: really fancy models have tens of convolutional blocks and other neat architectural tricks that make them really powerful. They have many millions of parameters and require not only humongous amounts of data but also thousands of (expensive) GPU hours for training.

I don’t know about you, but I have neither! So, what’s left to do? Transfer learning

to the rescue!

The idea is quite simple. First, some big tech company, which has access to virtually

infinite amounts of data and computing power, develops and trains a huge model

for their own purpose. Next, once it is trained, its architecture and the

