

Figure 11: Summary of the VAE model

Now we train the model. We define our loss function, which is the sum of the reconstruction loss and the KL divergence loss:

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(batch_size * 5).batch(batch_size)

num_batches = x_train.shape[0] // batch_size

for epoch in range(num_epochs):
    for step, x in enumerate(dataset):
        x = tf.reshape(x, [-1, image_size])
        with tf.GradientTape() as tape:
            # Forward pass
            x_reconstruction_logits, mu, log_var = model(x)
            # Compute reconstruction loss and kl divergence
            # Scaled by 'image_size' for each individual pixel.
            reconstruction_loss = tf.nn.sigmoid_cross_entropy_with_logits(
                labels=x, logits=x_reconstruction_logits)
            reconstruction_loss = tf.reduce_sum(reconstruction_loss) / batch_size
            kl_div = - 0.5 * tf.reduce_sum(
                1. + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
            kl_div = tf.reduce_mean(kl_div)
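
The two terms are then summed into the total loss and backpropagated through the model. The listing above stops inside the gradient tape, so the following is only a minimal sketch of how the training step can be completed; it assumes an optimizer (for example, tf.keras.optimizers.Adam) has been created beforehand as optimizer, and the logging interval is an illustrative choice rather than part of the book's code:

            # Total loss: sum of reconstruction and KL terms.
            # Computed inside the tape so the addition is recorded for backprop.
            loss = reconstruction_loss + kl_div

        # Backpropagation and parameter update (outside the tape context)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

        # Periodically report both loss terms (assumed logging interval)
        if (step + 1) % num_batches == 0:
            print('Epoch {}: reconstruction loss {:.4f}, KL divergence {:.4f}'
                  .format(epoch + 1, float(reconstruction_loss), float(kl_div)))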
