We then reshape the tensors from 2D (28×28 images) to 1D (784-element vectors). We use from_tensor_slices() to generate slices of the tensor. Also note that we are not using one-hot encoded labels here; in fact, we are not using the labels at all, because autoencoders learn via unsupervised learning:

(x_train, _), (x_test, _) = K.datasets.mnist.load_data()
x_train = x_train / 255.
x_test = x_test / 255.
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
x_train = np.reshape(x_train, (x_train.shape[0], 784))
x_test = np.reshape(x_test, (x_test.shape[0], 784))
training_dataset = tf.data.Dataset.from_tensor_slices(x_train).batch(batch_size)
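
As a quick, illustrative sanity check (not shown in the chapter's own listing), you can pull a single batch from the pipeline and confirm its shape:

# Inspect one batch from the tf.data pipeline
for batch in training_dataset.take(1):
    print(batch.shape)  # (batch_size, 784)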

Now we instantiate our autoencoder model object and define the loss and optimizer to be used for training. Observe the loss carefully: it is simply the mean squared difference between the original image and the reconstructed image. You may find it referred to as the reconstruction loss in many books and papers:

autoencoder = Autoencoder(hidden_dim=hidden_dim, original_dim=original_dim)
opt = tf.keras.optimizers.Adam(learning_rate=1e-2)

def loss(preds, real):
    return tf.reduce_mean(tf.square(tf.subtract(preds, real)))
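
The Autoencoder class itself is defined earlier in the chapter. For reference, here is a minimal sketch compatible with the constructor call above; the single Dense encoder/decoder pair and the choice of activations are assumptions, not the chapter's exact definition:

class Autoencoder(K.Model):
    def __init__(self, hidden_dim, original_dim):
        super().__init__()
        # Encoder: compress the 784-dimensional input to the bottleneck
        self.encoder = K.layers.Dense(hidden_dim, activation='relu')
        # Decoder: reconstruct the original image from the bottleneck
        self.decoder = K.layers.Dense(original_dim, activation='sigmoid')

    def call(self, inputs):
        return self.decoder(self.encoder(inputs))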

Instead of using Keras's built-in training loop, we will define a custom training loop for our autoencoder model. We use tf.GradientTape to record the gradients as they are calculated, and then explicitly apply the gradients to all the trainable variables of our model:

def train(loss, model, opt, original):
    with tf.GradientTape() as tape:
        preds = model(original)
        reconstruction_error = loss(preds, original)
    gradients = tape.gradient(reconstruction_error, model.trainable_variables)
    gradient_variables = zip(gradients, model.trainable_variables)
    opt.apply_gradients(gradient_variables)
    return reconstruction_error
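
For illustration, a single gradient step on one batch exercises the function like so:

# One illustrative gradient step on a single batch
for batch in training_dataset.take(1):
    step_error = train(loss, autoencoder, opt, batch)
    print(step_error.numpy())  # reconstruction error for this batch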

The preceding train() function will be invoked in a training loop, with the dataset fed to the model in batches:

def train_loop(model, opt, loss, dataset, epochs=20):
    for epoch in range(epochs):
        for batch_features in dataset:  # feed each batch to train()
            loss_values = train(loss, model, opt, batch_features)
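
With the dataset, model, optimizer, and loss in place, training comes down to a single call; as an illustrative extra, you can then measure the mean reconstruction error on the held-out test set:

train_loop(autoencoder, opt, loss, training_dataset, epochs=20)

# Illustrative check: mean reconstruction error on the test set
test_error = loss(autoencoder(x_test), x_test)
print(test_error.numpy())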
