Daniel Voigt Godoy - Deep Learning with PyTorch Step-by-Step A Beginner’s Guide-leanpub
Figure 2.5 - Scalars on TensorBoard

Not very useful, eh? We need to incorporate these elements into our model configuration and model training codes, which look like this now:

Run - Data Preparation V2

%run -i data_preparation/v2.py

TensorBoard | 159
Define - Model Configuration V3

%%writefile model_configuration/v3.py

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1

torch.manual_seed(42)
# Now we can create a model and send it at once to the device
model = nn.Sequential(nn.Linear(1, 1)).to(device)

# Defines an SGD optimizer to update the parameters
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines an MSE loss function
loss_fn = nn.MSELoss(reduction='mean')

# Creates the train_step function for our model,
# loss function and optimizer
train_step_fn = make_train_step_fn(model, loss_fn, optimizer)

# Creates the val_step function for our model and loss function
val_step_fn = make_val_step_fn(model, loss_fn)

# Creates a Summary Writer to interface with TensorBoard  (1)
writer = SummaryWriter('runs/simple_linear_regression')
# Fetches a single mini-batch so we can use add_graph
x_dummy, y_dummy = next(iter(train_loader))
writer.add_graph(model, x_dummy.to(device))

(1) Creating a SummaryWriter to interface with TensorBoard

Run - Model Configuration V3

%run -i model_configuration/v3.py

160 | Chapter 2: Rethinking the Training Loop
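The configuration above relies on make_train_step_fn and make_val_step_fn, the higher-order functions developed earlier in the chapter. As a self-contained sketch of that pattern (the function names match the chapter; the bodies are a plausible reconstruction for reference, not necessarily the author's exact code):

```python
import torch
import torch.nn as nn
import torch.optim as optim

def make_train_step_fn(model, loss_fn, optimizer):
    # Builds a function that performs a single step in the training loop
    def perform_train_step_fn(x, y):
        model.train()            # sets the model to training mode
        yhat = model(x)          # Step 1: computes the model's predictions
        loss = loss_fn(yhat, y)  # Step 2: computes the loss
        loss.backward()          # Step 3: computes the gradients
        optimizer.step()         # Step 4: updates the parameters
        optimizer.zero_grad()
        return loss.item()       # returns the loss as a Python float
    return perform_train_step_fn

def make_val_step_fn(model, loss_fn):
    # Builds a function that performs a single validation step
    # (no backward pass, no parameter update)
    def perform_val_step_fn(x, y):
        model.eval()             # sets the model to evaluation mode
        yhat = model(x)
        loss = loss_fn(yhat, y)
        return loss.item()
    return perform_val_step_fn
```

Since both builders close over the model, loss function, and (for training) the optimizer, the returned step functions only need a mini-batch (x, y) to do their job, which is what keeps the training loop itself so short.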