Higher-Order Functions

Although this is more of a coding topic, I believe it is necessary to have a good grasp on how higher-order functions work to fully benefit from Python’s capabilities and make the best out of our code. I will illustrate higher-order functions with an example so that you can gain a working knowledge of it, but I am not delving any deeper into the topic, as it is outside the scope of this book.

Let’s say we’d like to build a series of functions, each performing an exponentiation to a given power. The code would look like this:

def square(x):
    return x ** 2

def cube(x):
    return x ** 3

def fourth_power(x):
    return x ** 4

# and so on and so forth...

Well, clearly there is a higher structure to this:

• every function takes a single argument x, which is the number we’d like to exponentiate
• every function performs the same operation, an exponentiation, but each function has a different exponent

One way of solving this is to make the exponent an explicit argument, just like the code below:

def generic_exponentiation(x, exponent):
    return x ** exponent
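To see the downside in practice, here is a quick usage sketch (the call values are illustrative only):

# the exponent has to be passed explicitly on every call
generic_exponentiation(2, 2)   # 4, same as square(2)
generic_exponentiation(2, 3)   # 8, same as cube(2)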
That’s perfectly fine, and it works quite well. But it also requires that you
specify the exponent every time you call the function. There must be another
way! Of course, there is; that’s the purpose of this section!
We need to build another (higher-order) function to build those functions
(square, cube, etc.) for us. The (higher-order) function is just a function
builder. But how do we do that?
First, let’s build the "skeleton" of the functions we are trying to generate; they
all take a single argument x, and they all perform an exponentiation, each
using a different exponent.
Fine. It should look like this:
def skeleton_exponentiation(x):
    return x ** exponent
If you try calling this function with any x, say, skeleton_exponentiation(2),
you’ll get the following error:
skeleton_exponentiation(2)
Output
NameError: name 'exponent' is not defined
This is expected: Your "skeleton" function has no idea what the variable
exponent is! And that’s what the higher-order function is going to
accomplish.
We "wrap" our skeleton function with a higher-order function (which will
build the desired functions). Let’s call it, rather unimaginatively,
exponentiation_builder(). What are its arguments, if any? Well, we’re
trying to tell our skeleton function what its exponent should be, so let’s
start with that!
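To make the idea concrete before moving on, here is a minimal sketch of what such a builder could look like (the usage lines at the end are illustrative): the builder takes the exponent as its only argument, defines the skeleton function inside its own body so that the skeleton can "see" that exponent, and then returns the inner function.

def exponentiation_builder(exponent):
    # the inner function "closes over" exponent,
    # so it knows which power to raise x to
    def skeleton_exponentiation(x):
        return x ** exponent

    return skeleton_exponentiation

# building the functions we wanted in the first place
square = exponentiation_builder(2)
cube = exponentiation_builder(3)

square(5)  # returns 25
cube(2)    # returns 8

Calling exponentiation_builder(2) returns a brand new function that behaves exactly like the original square().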