print(dz_dx)  # Derivative of z with respect to x
print(dy_dx)  # Derivative of y with respect to x
del g  # Drop the reference to the tape

2. tf.gradient_function(): This returns a function that computes the derivatives of its input function parameter with respect to its arguments.

3. tf.value_and_gradients_function(): This returns the value of the input function in addition to the list of derivatives of the input function with respect to its arguments (a sketch of this value-and-gradient computation follows this list).

4. tf.implicit_gradients(): This computes the gradients of the outputs of the input function with respect to all trainable variables these outputs depend on.
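
The same value-and-gradient computation can also be expressed directly with tf.GradientTape(). Here is a minimal sketch, assuming a made-up input function f(x) = x ** 3 (the function and values are illustrative, not from the text):

import tensorflow as tf

def f(x):
    return x ** 3  # Hypothetical input function

x = tf.constant(2.0)
with tf.GradientTape() as tape:
    tape.watch(x)  # Track the constant x so the tape records operations on it
    y = f(x)       # Value of the function: 8.0
dy_dx = tape.gradient(y, x)  # Derivative 3*x**2 evaluated at x=2.0 -> 12.0

print(y.numpy())      # 8.0
print(dy_dx.numpy())  # 12.0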

Let's see a skeleton of a custom gradient computation where a model is given as input and the training step computes total_loss = pred_loss + regularization_loss. The decorator @tf.function is used for AutoGraph, and tape.gradient() and apply_gradients() are used to compute and apply the gradients:

@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
        regularization_loss = ...  # TBD according to the problem
        pred_loss = ...  # TBD according to the problem
        total_loss = pred_loss + regularization_loss
    gradients = tape.gradient(total_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
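
The two TBD lines depend on the problem at hand. As one hypothetical instantiation (an assumption, not from the text), a classifier might use sparse categorical cross-entropy as pred_loss and sum the model's own layer-level penalties, collected in model.losses, as regularization_loss:

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Inside train_step, the placeholders could then become:
regularization_loss = tf.add_n(model.losses) if model.losses else 0.0
pred_loss = loss_fn(labels, predictions)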

Then, for each epoch, the training step train_step(inputs, labels) is applied to each batch of inputs and associated labels in train_data:

for epoch in range(NUM_EPOCHS):
    for inputs, labels in train_data:
        train_step(inputs, labels)
    print("Finished epoch", epoch)

So, put simply, GradientTape() allows us to control and change how the training process is performed internally. In Chapter 9, Autoencoders, you will see a more concrete example of using GradientTape() for training autoencoders.

